Is there any tool to split a PVR texture into a set of tiles?

Posted 2024-11-08 01:03:30


I've one big PNG texture, 4096x4096, that I need to load parts of into memory. I have already split the big PNG texture into 16 1024x1024 tiles and then converted them to PVR compressed files.

The problem is that when I draw these tiles, the edges between tiles do not look the same as they do in the original PNG. So I am wondering if there is a tool that can generate one 4096x4096 PVR texture and then split it into 16 1024x1024 PVR tiles.
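
For reference, here is a minimal sketch of the splitting step described above, assuming Pillow for the image slicing; the atlas file name is made up, and the PVRTexToolCLI command in the comment is only an assumption about the conversion step (its flags may differ between versions).

```python
# Minimal sketch of the tile-splitting step, assuming Pillow.
from PIL import Image

TILE = 1024  # tile edge length in pixels

atlas = Image.open("atlas_4096.png")  # hypothetical 4096x4096 source PNG

for ty in range(atlas.height // TILE):
    for tx in range(atlas.width // TILE):
        box = (tx * TILE, ty * TILE, (tx + 1) * TILE, (ty + 1) * TILE)
        atlas.crop(box).save(f"tile_{tx}_{ty}.png")
        # Each tile would then be compressed separately, e.g. with something like
        #   PVRTexToolCLI -i tile_0_0.png -o tile_0_0.pvr -f PVRTC1_4
        # (tool name and flags are assumptions; check your PVRTexTool version).
```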


Comments (1)

审判长 2024-11-15 01:03:30


By PVR (which is a more general texture file format that supports several texture types) I assume you mean PVRTC?

PVRTC isn't block-based in the traditional sense where, say, with ETC or S3TC a texture is split into 4x4 pixel blocks and each block is compressed separately. Instead it attempts to share data between sets of overlapping neighbourhoods of pixels. It also (sort of) assumes that the texture probably tiles, so, for example, the extreme left edge area actually shares information with the extreme right-hand edge (and similarly for top and bottom). This is usually not too much of a problem unless the edges are completely different.

If you thus tried to subdivide an already compressed texture into smaller regions, it's not going to work, because the compressor has made assumptions about what was being shared in the big image, which won't be the same as what is shared in the small ones.

As for compressing each piece separately, it sounds like the edges of each separate piece might be quite different. The only thing I can think of is to chop your original texture into, say, (2^N - 4) x (2^N - 4) units but store them, centred, in 2^N x 2^N textures, padding the 2-pixel border with a copy of the original neighbouring pixels. You then set up your texture mapping to use only the centre (2^N - 4) x (2^N - 4) region. That, hopefully, should reduce the discontinuity artefacts.
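
To make the suggested padding scheme concrete, here is a minimal sketch, again assuming Pillow; the tile size, border width, file names, and the UV inset calculation are illustrative assumptions rather than part of the original answer.

```python
# Minimal sketch of the padded-tile scheme, assuming Pillow.
from PIL import Image

TILE = 1024                 # final tile size (2^N)
BORDER = 2                  # apron copied from neighbouring atlas pixels
INNER = TILE - 2 * BORDER   # (2^N - 4): region actually sampled at draw time

atlas = Image.open("atlas_4096.png")
cols = atlas.width // INNER   # 4096 is not a multiple of 1020, so the last
rows = atlas.height // INNER  # row/column would need special handling

for ty in range(rows):
    for tx in range(cols):
        x0, y0 = tx * INNER, ty * INNER
        # Crop the inner region plus a BORDER-wide apron of the *original*
        # atlas pixels. At the outer atlas edge Pillow fills the out-of-range
        # area with black; a real tool would replicate the edge pixels instead.
        box = (x0 - BORDER, y0 - BORDER, x0 + INNER + BORDER, y0 + INNER + BORDER)
        atlas.crop(box).save(f"padded_tile_{tx}_{ty}.png")  # then compress to PVRTC

# At draw time, inset the UVs so only the centre INNER x INNER region is sampled:
uv_min = BORDER / TILE           # 2 / 1024
uv_max = (TILE - BORDER) / TILE  # 1022 / 1024
```

Because sampling never reaches the outer 2 pixels of each tile, any artefacts the compressor introduces near a tile's edge stay inside the unused apron instead of showing up on a visible seam.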
