OpenGL rendering vs. my own Phong illumination implementation

Published 2024-08-27 02:08:33


I have implemented a Phong Illumination Scheme using a camera that's centered at (0,0,0) and looking directly at the sphere primitive. The following are the relevant contents of the scene file that is used to view the scene using OpenGL as well as to render the scene using my own implementation:

ambient 0 1 0

dir_light  1 1 1       -3 -4 -5

# A red sphere with 0.5 green ambiance, centered at (0,0,0) with radius 1
material  0 0.5 0  1 0 0    1 0 0   0 0 0  0 0 0  10 1 0
sphere   0    0 0 0    1   

[Image] The resulting image produced by OpenGL.

[Image] The image that my rendering application produces.

As you can see, there are various differences between the two:

  1. The specular highlight on my image is smaller than the one in OpenGL.
  2. The diffuse surface does not seem to diffuse light correctly, causing the yellow region in my image to be unnecessarily large, whereas in OpenGL there is a nice dark green region closer to the bottom of the sphere.
  3. The color produced by OpenGL is much darker than the one in my image.

Those are the most prominent three differences that I see. The following is my implementation of the Phong illumination:

R3Rgb Phong(R3Scene *scene, R3Ray *ray, R3Intersection *intersection)
{
  R3Rgb radiance;
  if(intersection->hit == 0)
  {
    radiance = scene->background;
    return radiance;
  }

  R3Vector normal = intersection->normal;
  R3Rgb Kd = intersection->node->material->kd;
  R3Rgb Ks = intersection->node->material->ks;

  // obtain ambient term
  R3Rgb intensity_ambient = intersection->node->material->ka*scene->ambient;

  // obtain emissive term
  R3Rgb intensity_emission = intersection->node->material->emission;

  // for each light in the scene, calculate the diffuse and specular terms
  R3Rgb intensity_diffuse(0,0,0,1);
  R3Rgb intensity_specular(0,0,0,1);
  for(unsigned int i = 0; i < scene->lights.size(); i++)
  {
    R3Light *light = scene->Light(i);
    R3Rgb light_color = LightIntensity(scene->Light(i), intersection->position);
    R3Vector light_vector = -LightDirection(scene->Light(i), intersection->position);

    // calculate diffuse reflection
    intensity_diffuse += Kd*max(0.,normal.Dot(light_vector))*light_color;

    // calculate specular reflection
    R3Vector reflection_vector = 2.*normal.Dot(light_vector)*normal-light_vector;
    reflection_vector.Normalize();
    R3Vector viewing_vector = ray->Start() - intersection->position;
    viewing_vector.Normalize();
    double n = intersection->node->material->shininess;
    intensity_specular += Ks*pow(max(0.,viewing_vector.Dot(reflection_vector)),n)*light_color;

  }

  radiance = intensity_emission+intensity_ambient+intensity_diffuse+intensity_specular;
  return radiance;
}

Here are the related LightIntensity(...) and LightDirection(...) functions:

R3Vector LightDirection(R3Light *light, R3Point position)
{
  R3Vector light_direction;
  switch(light->type)
  {
    case R3_DIRECTIONAL_LIGHT:
      light_direction = light->direction;
      break;

    case R3_POINT_LIGHT:
      light_direction = position-light->position;
      break;

    case R3_SPOT_LIGHT:
      light_direction = position-light->position;
      break;
  }
  light_direction.Normalize();
  return light_direction;
}

R3Rgb LightIntensity(R3Light *light, R3Point position)
{
  R3Rgb light_intensity; 
  double distance;
  double denominator;
  if(light->type != R3_DIRECTIONAL_LIGHT)
  {
    distance = (position-light->position).Length();
    denominator = light->constant_attenuation + 
                         light->linear_attenuation*distance + 
                         light->quadratic_attenuation*distance*distance;
  }   

  switch(light->type)
  {
    case R3_DIRECTIONAL_LIGHT:
      light_intensity = light->color;
      break;

    case R3_POINT_LIGHT:
      light_intensity = light->color/denominator;
      break;

    case R3_SPOT_LIGHT:
    {
      // braces give the local variable its own scope inside the switch
      R3Vector from_light_to_point = position - light->position;
      from_light_to_point.Normalize();  // normalize before taking the dot product
      light_intensity = light->color*(
                        pow(light->direction.Dot(from_light_to_point),
                            light->angle_attenuation));
      break;
    }
  }
  return light_intensity;
}

I would greatly appreciate any suggestions as to any implementation errors that are apparent. I am wondering if the differences could be occurring simply because of the gamma values used for display by OpenGL and the default gamma value for my display. I also know that OpenGL (or at least the parts that I was provided) can't cast shadows on objects. Not that this is relevant for the point in question, but it just leads me to wonder if it's simply display and capability differences between OpenGL and what I am trying to do.

Thank you for your help.

Comments (2)

是伱的 2024-09-03 02:08:33


As a first step, I would check whether your intersection surface normal is normalized, which is especially important when calculating the diffuse and specular dot products.

For debugging purposes, you can check the outputs of your lighting terms (such as the scene ambient output, the light ambient-diffuse-specular output, the light attenuation factors, etc.) one by one, zeroing the other terms in the equations. Some simple terms are likely to produce identical output, and you can narrow your search down to fewer lines of code with this approach. It may even turn out to be related to other objects or methods in your implementation.

Also, please keep in mind that OpenGL's Phong shading does not follow the Phong shading model strictly: the normals are calculated per vertex and then interpolated inside the triangles, rather than per point on the surface. Your sphere model seems to be tessellated enough, so this should not be a practical problem.

As far as I know, OpenGL does not perform gamma correction unless you use an sRGB color space as the render target. I would expect a correct software implementation to produce results very similar to those of a hardware OpenGL implementation. Happy debugging :)
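The term-by-term checks suggested above can be sketched in isolation. This is a minimal illustration with a stand-in `Vec3` type (the `R3Vector`/`R3Rgb` classes from the question are not available here, so all names are hypothetical); it assumes the normal, light, and view vectors are already normalized, and it clamps each dot product the way fixed-function OpenGL does:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical stand-in for the R3Vector type used in the question.
struct Vec3 {
    double x, y, z;
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

// Diffuse term in isolation: clamped so back-facing points receive no light.
double diffuse_term(const Vec3& n, const Vec3& l) {
    return std::max(0.0, n.dot(l));
}

// Specular term in isolation: reflect l about n, then raise the clamped
// view-reflection dot product to the shininess exponent.
double specular_term(const Vec3& n, const Vec3& l, const Vec3& v,
                     double shininess) {
    Vec3 r = n * (2.0 * n.dot(l)) - l;
    return std::pow(std::max(0.0, v.dot(r)), shininess);
}
```

With n = l = v = (0,0,1) both terms evaluate to 1; flipping the light below the surface drives the clamped diffuse term to 0 rather than a negative value that would darken the ambient contribution.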

十秒萌定你 2024-09-03 02:08:33


In my case, my initial guess about the differences in gamma values was correct. The main program that called my rendering algorithm performed gamma correction by converting each pixel's RGB value of my image with an image->TosRGB() call. After commenting out that call, I obtained the same image that OpenGL produced.
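For reference, the kind of conversion a TosRGB()-style call typically applies is the standard linear-to-sRGB transfer function (the actual R3Image implementation may differ; this is only a sketch of the usual formula):

```cpp
#include <cmath>

// Standard linear-to-sRGB encoding for a single channel in [0, 1]:
// a linear segment near black, then a 1/2.4 power curve.
double linear_to_srgb(double c) {
    if (c <= 0.0031308)
        return 12.92 * c;
    return 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}
```

A mid-gray linear value of 0.5 maps to roughly 0.735, which is consistent with the observation in the question that the gamma-corrected render looks noticeably brighter than the raw OpenGL output.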
