Stereoscopic rendering using OpenGL and GeForce
Hi there.

I'm new here, and I apologize in advance if this subject has already been discussed, but I would like to ask for a clear answer.

Is there ANY way to use 3D Vision stereoscopic technology with OpenGL (not the DX library) and a GeForce (not a Quadro) chip?

I'm part of a development team working on an application for home users that uses stereoscopic rendering. We need to render two frames, one each for the left and right eye, and then display them in stereoscopic mode. Is there any way to do it like in the DX case: render both images, create a special texture holding both, add a special header, etc.?
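
(For reference, the "special header" in the D3D case usually means the NVSTEREOIMAGEHEADER tag from NVIDIA's 3D Vision sample code. Below is a minimal sketch of its layout, assuming the definitions in the samples' nvstereo.h; verify the exact names and values against your own SDK copy.)

[code]
// Hedged sketch: the stereo tag used by the DX9 double-width-surface trick.
// Struct and constants follow nvstereo.h from NVIDIA's 3D Vision samples; treat the
// exact names/values as an assumption and check them against your SDK.
typedef struct _NVSTEREOIMAGEHEADER
{
    unsigned int dwSignature;   // NVSTEREO_IMAGE_SIGNATURE
    unsigned int dwWidth;       // full width of the packed image (2 * eye width)
    unsigned int dwHeight;      // height of one eye's image
    unsigned int dwBPP;         // bits per pixel, e.g. 32
    unsigned int dwFlags;       // e.g. SIH_SWAP_EYES, SIH_SCALE_TO_FIT
} NVSTEREOIMAGEHEADER, *LPNVSTEREOIMAGEHEADER;

#define NVSTEREO_IMAGE_SIGNATURE 0x4433564e  // 'NV3D'
#define SIH_SWAP_EYES            0x00000001
#define SIH_SCALE_TO_FIT         0x00000002

// The header is written into the extra last row of a (2*w) x (h+1) staging surface
// (locked via IDirect3DSurface9::LockRect); the left/right images fill the rest of
// the surface side by side.
[/code]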



Greetings and thanks in advance,
Zegar

#1
Posted 09/16/2011 05:02 PM   
With 3D Vision: not at all.

Some games can be modded from OpenGL to DirectX.
Some games (mainly those about 5 years old or older) work with a much older version of 3D (pre-3D Vision). Though old-schoolers rave about it, it's not too hot (you need a different kit and an old video card). They don't render well, if you ask me.

#2
Posted 09/16/2011 06:04 PM   
It may be possible. Have you checked the NVIDIA API? http://developer.nvidia.com/nvapi
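
A minimal sketch of a first NVAPI check, assuming the nvapi.h header from the SDK at that link; NvAPI_Initialize and NvAPI_Stereo_IsEnabled are the relevant entry points:

[code]
// Hedged sketch: ask the driver whether 3D Vision stereo is enabled at all.
#include <nvapi.h>

bool IsStereoDriverEnabled()
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;   // NVAPI not available (non-NVIDIA GPU, old driver, ...)

    NvU8 enabled = 0;
    if (NvAPI_Stereo_IsEnabled(&enabled) != NVAPI_OK)
        return false;

    return enabled != 0;
}
[/code]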


Mb: Asus P5W DH Deluxe

Cpu: C2D E6600

Gb: Nvidia 7900GT + 8800GTX

3D:100" passive projector polarized setup + 22" IZ3D

Stereodrivers: Iz3d & Tridef ignition and nvidia old school.

#3
Posted 09/16/2011 06:18 PM   
There is no way in an OpenGL app to do what you want without a Quadro card.
It's one of the ways NVIDIA chooses to differentiate its professional offerings.

If you're targeting home users, you'll have to use D3D.
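
For context, the Quadro-only path meant here is OpenGL quad-buffered stereo. A minimal sketch of the per-frame part, assuming a stereo-capable pixel format (PFD_STEREO on Windows) has already been created:

[code]
// Hedged sketch of quad-buffered stereo in OpenGL (requires a stereo pixel format,
// which GeForce drivers do not expose for 3D Vision, hence the Quadro requirement).
#include <windows.h>
#include <GL/gl.h>

void DrawStereoFrame()
{
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... render the scene from the left-eye camera ...

    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... render the scene from the right-eye camera ...

    // SwapBuffers() then presents both eyes in sync with the shutter glasses.
}
[/code]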
#4
Posted 09/16/2011 07:13 PM   
I've always been curious whether it would be possible to render the left and right images using OpenGL and use DirectX to display them on screen. You'd need to use NVAPI to enable manual stereoscopy, but I can't think of a reason why it wouldn't work, unless it's just not possible to have both an OpenGL and a DirectX device running at the same time.

#5
Posted 09/16/2011 07:52 PM   
NVAPI has no setting to enable "manual stereoscopy".
The only things NVAPI is good for are querying the 3D Vision button state and adjusting convergence/separation if you're using 3D Vision Automatic.
The ReverseStereoBlit function is for STEREOSCOPIC VIDEO CAPTURE, like Fraps does, NOT FOR QUAD-BUFFER purposes.
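
For what it's worth, a minimal sketch of that convergence/separation adjustment, assuming a D3D9 device and the NvAPI_Stereo_* entry points from the NVAPI SDK:

[code]
// Hedged sketch: tuning 3D Vision Automatic separation/convergence through NVAPI.
// Assumes pDevice is an existing IDirect3DDevice9*.
#include <d3d9.h>
#include <nvapi.h>

void TuneStereo(IDirect3DDevice9* pDevice)
{
    StereoHandle hStereo = 0;
    if (NvAPI_Stereo_CreateHandleFromIUnknown(pDevice, &hStereo) != NVAPI_OK)
        return;

    NvAPI_Stereo_SetSeparation(hStereo, 15.0f);   // percent of the driver maximum
    NvAPI_Stereo_SetConvergence(hStereo, 1.5f);   // world-space units, tune per scene

    NvAPI_Stereo_DestroyHandle(hStereo);
}
[/code]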

#6
Posted 09/16/2011 10:34 PM   
[quote name='rajkoderp' date='16 September 2011 - 11:34 PM' timestamp='1316212488' post='1294036']
NVAPI has no setting to enable "manual stereoscopy".
The only things NVAPI is good for are querying the 3D Vision button state and adjusting convergence/separation if you're using 3D Vision Automatic.
The ReverseStereoBlit function is for STEREOSCOPIC VIDEO CAPTURE, like Fraps does, NOT FOR QUAD-BUFFER purposes.
[/quote]

Okay, apparently it's called Active Stereoization, whatever. I've no idea how to do it, but I swear I've seen people pull it off with 3D Vision: blitting the two images into one double-width texture using StretchRect and presenting with some flag enabled. I wouldn't be surprised if having both DirectX and OpenGL devices running is impossible (I've never had reason to try anything like that), but if it isn't, converting the OpenGL textures to a DirectX texture and doing the blit would be simple enough.

I dunno, I was just curious.
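
A minimal sketch of the blit-and-present step being described, assuming a D3D9 device and a double-width (2*w) x (h+1) surface already tagged with the stereo header mentioned earlier in the thread (the variable names here are just placeholders):

[code]
// Hedged sketch: pack left/right into the tagged double-width surface, then blit it to
// the back buffer so the 3D Vision driver sees the signature row and splits the eyes.
#include <d3d9.h>

void PresentStereoFrame(IDirect3DDevice9* dev,
                        IDirect3DSurface9* leftEye,
                        IDirect3DSurface9* rightEye,
                        IDirect3DSurface9* stereoSurface,  // (2*w) x (h+1), header in last row
                        LONG w, LONG h)
{
    RECT leftDst  = { 0, 0, w,     h };
    RECT rightDst = { w, 0, 2 * w, h };

    dev->StretchRect(leftEye,  NULL, stereoSurface, &leftDst,  D3DTEXF_NONE);
    dev->StretchRect(rightEye, NULL, stereoSurface, &rightDst, D3DTEXF_NONE);

    IDirect3DSurface9* backBuffer = NULL;
    dev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
    dev->StretchRect(stereoSurface, NULL, backBuffer, NULL, D3DTEXF_NONE);
    backBuffer->Release();

    dev->Present(NULL, NULL, NULL, NULL);
}
[/code]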

#7
Posted 09/17/2011 04:33 AM   
You could do what you're suggesting, but the performance impact would be significant.
You'd have to copy the final images via main RAM.
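
A minimal sketch of the OpenGL half of that round trip; the glReadPixels stall plus the upload into a locked D3D surface is where the cost would come from:

[code]
// Hedged sketch: read one eye's finished image back from the OpenGL framebuffer into
// system memory so it can then be copied into a locked D3D9 surface.
#include <windows.h>
#include <GL/gl.h>
#include <vector>

std::vector<unsigned char> ReadEyeImage(int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadBuffer(GL_BACK);
    // Blocks until rendering is finished, then copies the image over the bus.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}
[/code]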
#8
Posted 09/17/2011 04:52 PM   
I didn't consider that; yeah, it would have a pretty large impact, especially since you wouldn't have any of the driver optimizations for 3D either.

#9
Posted 09/17/2011 09:43 PM   