Precision loss from unprojecting and reprojecting

Published 2024-12-01 15:10:15


I am implementing a variant of the shadow mapping technique but I (think) am suffering from precision loss.

Here is what I do:

  • I draw my scene from eyeposition to fill the depth buffer
  • I unproject these points using gluUnproject
  • I reproject these points from my lightsource as eyepoint using gluProject
  • I then loop over all my triangles, project these from my lightsource as eyepoint

-> For points (from the first step) that overlap a triangle, I compare depths: the depth interpolated at the triangle's pixel against the depth I reprojected in step 2. If the triangle is closer, the point is in shadow.

I use barycentric coordinates to interpolate depth at an irregular location. This means comparing three float values against zero and comparing two floats to see which one is smaller. I used a bias on all these comparisons without any big effect (eps = 0.00001).

The algorithm is working nicely, but I still have some artifacts, and I think these can be attributed to the un- and reprojecting. Can this be?

I am using a perspective projection, my near = 1.0 and my far = 20.0.
What can I do to improve this?

I'd be happy to show some code but it's quite a lot. So let's see what suggestions come out first.

Artifact http://img849.imageshack.us/img849/4420/artifactk.png

I read my pixels and unproject in this way:

//Get the original pixels
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[0]);
glReadPixels(0, 0, 800, 300, GL_DEPTH_COMPONENT, GL_FLOAT, BUFFER_OFFSET(0));

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[1]);
glReadPixels(0, 300, 800, 300, GL_DEPTH_COMPONENT, GL_FLOAT, BUFFER_OFFSET(0));

//Process the first batch of pixels
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[0]);
GLfloat *pixels1 = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
processPixels( pixels1, lightPoints, modelview, projection, viewport, 0);

//Process the second batch of pixels
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[1]);
GLfloat *pixels2 = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
processPixels( pixels2, lightPoints, modelview, projection, viewport, 1);

//Unmap buffers and restore the default buffer
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[0]);
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[1]);
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

//Projecting the original points to lightspace
glLoadIdentity();   
gluLookAt( light_position[0], light_position[1], light_position[2], 0.0,0.0,0.0,0.0,1.0,0.0);

//We get the new modelview matrix - Lightspace
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );

//std::cout <<"Reprojecting" << std::endl;
GLdouble winX, winY, winZ;
Vertex temp;
//Projecting the points into lightspace and saving the sample points
for(vector<Vertex>::iterator vertex = lightPoints.begin(); vertex != lightPoints.end(); ++vertex){
    gluProject(vertex->x, vertex->y, vertex->z, modelview, projection, viewport, &winX, &winY, &winZ);
    temp.x = winX;
    temp.y = winY;
    temp.z = winZ;
//  std::cout << winX << " " << winY << " " << winZ << std::endl;
    samplePoints.push_back(temp);
}

My depth buffer is 24 bits, which I can't change AFAIK (ATI Radeon HD4570, and I am using GLUT).

I compare my depth:

if(rasterizer.interpolateDepth(A, B, C, baryc) < sample->z - 0.00001*sample->z){
    stencilBits[((int)sample->y*800 + (int)sample->x)] = 1;
}

Both are floats.

Floats should be enough precision, by the way; the paper I am basing this on uses floats as well.

Comments (2)

葬花如无物 2024-12-08 15:10:15

Try putting the far plane way further away: http://www.codermind.com/files/small_wnear.gif

↘紸啶 2024-12-08 15:10:15

A couple of suggestions:
- implement regular shadow mapping without the CPU steps first
- very, very carefully read the OpenGL pipeline math and make sure you get everything right, including rounding
- you say you interpolate depth. That sounds very wrong - you just cannot linearly interpolate depth as-is (you can with depth squared, but I don't think you are doing that)
