How to make opengl mipmaps sharper?
I am rewriting an opengl-based gis/mapping program. Among other things, the program allows you to load raster images of nautical charts, fix them to lon/lat coordinates and zoom and pan around on them.
The previous version of the program uses a custom tiling system, where in essence it manually creates mipmaps of the original image, in the form of 256x256-pixel tiles at various power-of-two zoom levels. A tile for zoom level n - 1 is constructed from four tiles from zoom level n, using a simple average-of-four-points algorithm. So, it turns off opengl mipmapping, and instead when it comes time to draw some part of the chart at some zoom level, it uses the tiles from the nearest-match zoom level (i.e., the tiles are in power-of-two zoom levels but the program allows arbitrary zoom levels) and then scales the tiles to match the actual zoom level. And of course it has to manage a cache of all these tiles at various levels.
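For concreteness, that average-of-four reduction is just a 2x2 box filter. A minimal sketch of it in C, assuming RGBA byte pixels and square tiles (the helper name is mine):

    #include <stdlib.h>

    /* Sketch of the average-of-four (2x2 box) reduction described above:
     * each destination pixel is the mean of the four source pixels it
     * covers. RGBA byte pixels and square tiles are assumed. */
    unsigned char *box_downsample_2x(const unsigned char *src, int src_size)
    {
        int dst_size = src_size / 2;
        unsigned char *dst = malloc((size_t)dst_size * dst_size * 4);
        for (int y = 0; y < dst_size; y++) {
            for (int x = 0; x < dst_size; x++) {
                for (int c = 0; c < 4; c++) {
                    int p0 = src[((2 * y)     * src_size + 2 * x)     * 4 + c];
                    int p1 = src[((2 * y)     * src_size + 2 * x + 1) * 4 + c];
                    int p2 = src[((2 * y + 1) * src_size + 2 * x)     * 4 + c];
                    int p3 = src[((2 * y + 1) * src_size + 2 * x + 1) * 4 + c];
                    dst[(y * dst_size + x) * 4 + c] =
                        (unsigned char)((p0 + p1 + p2 + p3 + 2) / 4);
                }
            }
        }
        return dst;
    }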
It seemed to me that this tiling system was overly complex. It seemed like I should be able to let the graphics hardware do all of this mipmapping work for me. So in the new program, when I read in an image, I chop it into textures of 1024x1024 pixels each. Then I fix each texture to its lon/lat coordinates, and then I let opengl handle the rest as I zoom and pan around.
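Roughly, the per-tile setup looks like this (a sketch; the 1024x1024 RGBA layout and the GL_GENERATE_MIPMAP route are my assumptions, not the only way to do it):

    #include <GL/gl.h>

    /* Sketch of the per-tile setup described above: upload one 1024x1024
     * RGBA tile and let the driver build the mipmap chain from level 0.
     * (glGenerateMipmap is the GL 3.0+ equivalent of GL_GENERATE_MIPMAP.) */
    GLuint upload_tile(const void *rgba_pixels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        /* Ask the driver to derive all coarser levels automatically. */
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
        return tex;
    }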
It works, but the problem is: My results are a bit blurrier than the original program, which matters for this application because you want to be able to read text on the charts as early as possible, zoom-wise. So it's seeming like the simple average-of-four-points algorithm the original program uses gives better results than opengl + my GPU, in terms of sharpness.
I know there are several glTexParameter settings to control some aspects of how mipmaps work. I've tried various combinations of GL_TEXTURE_MAX_LEVEL (anywhere from 0 to 10) with various settings for GL_TEXTURE_MIN_FILTER. When I set GL_TEXTURE_MAX_LEVEL to 0 (no mipmaps), I certainly get "sharp" results, but they are too sharp, in the sense that pixels just get dropped here and there, so the numbers are unreadable at intermediate zooms. When I set GL_TEXTURE_MAX_LEVEL to a higher value, the image looks quite good when you are zoomed far out (e.g., when the whole chart fits on the screen), but as you zoom in to intermediate zooms, you notice the blurriness especially when looking at text on the charts. (I.e., if it weren't for the text you might think "wow, opengl is doing a nice job of smoothly scaling my image." but with the text you think "why is this chart out of focus?")
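The combinations I'm describing are along these lines (a sketch; tex is a tile texture as above, and the specific values are just examples):

    /* Variant A: no mipmaps at all -- "too sharp", pixels get dropped. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* Variant B: full mipmap chain with trilinear filtering -- smooth,
     * but noticeably blurry at intermediate zooms. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 10);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);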
My understanding is that basically you tell opengl to generate mipmaps, and then as you zoom in it picks the appropriate mipmaps to use, and there are some limited options for interpolating between the two closest mipmap levels, and either using the closest pixels or averaging the nearby pixels. However, as I say, none of these combinations seem to give quite as clear results, at the same zoom level on the chart (i.e., a zoom level where text is small but not minuscule, like the equivalent of "7 point" or "8 point" size), as the previous tile-based version.
My conclusion is that the mipmaps that opengl creates are simply blurrier than the ones the previous program created with the average-four-point algorithm, and no amount of choosing the right mipmap or LINEAR vs NEAREST is going to get the sharpness I need.
Specific questions:
(1) Does it seem right that opengl is in fact making blurrier mipmaps than the average-four-points algorithm from the original program?
(2) Is there something I might have overlooked in my use of glTexParameter that could give sharper results using the mipmaps opengl is making?
(3) Is there some way I can get opengl to make sharper mipmaps in the first place, such as by using a "cubic" filter or otherwise controlling the mipmap creation process? Or for that matter it seems like I could use the same average-four-points code to manually generate the mipmaps and hand them off to opengl. But I don't know how to do that...
2 Answers
(1) it seems unlikely; I'd expect it just to use a box filter, which is average four points in effect. Possibly it's just switching from one texture to a higher resolution one at a different moment — e.g. it "Chooses the mipmap that most closely matches the size of the pixel being textured", so a 256x256 map will be used to texture a 383x383 area, whereas the manual system it replaces may always have scaled down from 512x512 until the target size was 256x256 or less.
(2) not that I'm aware of in base GL, but if you were to switch to GLSL and the programmable pipeline then you could use the 'bias' parameter to texture2D if the problem is that the lower resolution map is being used when you don't want it to be. Similarly, the GL_EXT_texture_lod_bias extension can do the same in the fixed pipeline. It's an NVidia extension from a decade ago and is something all programmable cards could do, so it's reasonably likely you'll have it.
(EDIT: reading the extension more thoroughly, texture bias migrated into the core spec of OpenGL in version 1.4; clearly my man pages are very out of date. Checking the 1.4 spec, page 279, you can supply a GL_TEXTURE_LOD_BIAS)
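A minimal sketch of the fixed-pipeline route, assuming GL 1.4 or the extension is available; the -0.5 value is only an example to tune by eye:

    /* A negative LOD bias makes the sampler pick a finer (sharper) mipmap
     * level than it otherwise would. GL 1.4 / GL_EXT_texture_lod_bias,
     * fixed pipeline; in GLSL, the optional third argument to texture2D
     * applies the same bias per sample. */
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, -0.5f);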
(3) yes — if you disable GL_GENERATE_MIPMAP then you can use glTexImage2D to supply whatever image you like for every level of scale, that being what the 'level' parameter dictates. So you can supply completely unrelated mip maps if you want.
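For point (3), a sketch of what handing your own levels to GL might look like, assuming 1024x1024 RGBA tiles; downsample_2x is a placeholder for whatever reduction you prefer (your existing average-of-four code, for example):

    #include <GL/gl.h>
    #include <stdlib.h>

    /* Hypothetical helper: halves a square RGBA image, e.g. with the
     * average-of-four reduction from the old tiling code. */
    unsigned char *downsample_2x(const unsigned char *src, int src_size);

    /* Sketch: build the mipmap chain yourself and hand each level to GL
     * via the 'level' argument of glTexImage2D. GL_GENERATE_MIPMAP stays
     * off so the driver does not overwrite the supplied levels. Assumes
     * 1024x1024 RGBA tiles. */
    GLuint upload_tile_with_manual_mipmaps(unsigned char *level0)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        unsigned char *pixels = level0;
        int size = 1024;
        for (int level = 0; ; level++) {
            glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, size, size, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            if (size == 1)
                break;
            unsigned char *next = downsample_2x(pixels, size);
            if (pixels != level0)
                free(pixels);
            pixels = next;
            size /= 2;
        }
        if (pixels != level0)
            free(pixels);
        /* The full chain down to 1x1 is supplied, so GL_TEXTURE_MAX_LEVEL
         * can be left at its default. */
        return tex;
    }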
To answer your specific points, the four-point filtering you mention is equivalent to box-filtering. This is less blurry than higher-order filters, but can result in aliasing patterns. One of the best filters is the Lanczos filter. I suggest you calculate all of your mipmap levels from the base texture using a Lanczos filter and crank up the anisotropic filtering settings on your graphics card.
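A sketch of the two pieces mentioned here. The anisotropy calls need GL_EXT_texture_filter_anisotropic (check the extension string first), and the Lanczos-filtered levels themselves would be uploaded one level at a time via the glTexImage2D route described in the other answer:

    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <math.h>

    /* Lanczos-3 kernel weight: sinc(x) * sinc(x/3) for |x| < 3, else 0.
     * A separable resampler would use these weights along each axis in
     * place of the box filter's constant 1/4. */
    static double lanczos3(double x)
    {
        const double pi = 3.14159265358979323846;
        if (x == 0.0) return 1.0;
        if (fabs(x) >= 3.0) return 0.0;
        return 3.0 * sin(pi * x) * sin(pi * x / 3.0) / (pi * pi * x * x);
    }

    /* Crank anisotropic filtering up to the hardware maximum for one
     * tile texture (GL_EXT_texture_filter_anisotropic). */
    void set_max_anisotropy(GLuint tex)
    {
        GLfloat max_aniso = 1.0f;
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);
    }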
I assume that the original code managed textures itself because it was designed to view data sets that are too large to fit into graphics memory. This was probably a bigger problem in the past, but is still a concern.