Is there a fast alternative for creating a Texture2D from a Bitmap object in XNA?
I've looked around a lot and the only methods I've found for creating a Texture2D from a Bitmap are:
using (MemoryStream s = new MemoryStream())
{
    bmp.Save(s, System.Drawing.Imaging.ImageFormat.Png);
    s.Seek(0, SeekOrigin.Begin);
    Texture2D tx = Texture2D.FromFile(device, s);
}
and
Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height,
    0, TextureUsage.None, SurfaceFormat.Color);
tx.SetData<byte>(rgbValues, 0, rgbValues.Length, SetDataOptions.NoOverwrite);
Where rgbValues is a byte array containing the bitmap's pixel data in 32-bit ARGB format.
My question is, are there any faster approaches that I can try?
I am writing a map editor which has to read in custom-format images (map tiles) and convert them into Texture2D textures to display. The previous version of the editor, a C++ implementation, converted the images first into bitmaps and then into textures to be drawn using DirectX. I have attempted the same approach here; however, both of the above approaches are far too slow. Loading all of the textures required for a map into memory takes ~250 seconds with the first approach and ~110 seconds with the second on a reasonably specced computer (for comparison, the C++ code took approximately 5 seconds).

If there is a way to edit the data of a texture directly (such as with the Bitmap class's LockBits method), then I would be able to convert the custom-format images straight into a Texture2D and hopefully save processing time.
Any help would be very much appreciated.
Thanks
Comments (4)
You want LockBits? You get LockBits.
In my implementation I passed in the GraphicsDevice from the caller so I could make this method generic and static.
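The answer's code block did not survive on this page, so below is a minimal sketch of the LockBits approach it describes, assuming the XNA 3.x API used in the question (the method name and details are illustrative, not the answerer's original code). In XNA 3.x, SurfaceFormat.Color has the same BGRA byte layout as GDI+'s Format32bppArgb, so the pixels can be copied across without a channel swap:

using System.Drawing;
using System.Drawing.Imaging;
using Microsoft.Xna.Framework.Graphics;

public static Texture2D FromBitmap(GraphicsDevice device, Bitmap bmp)
{
    // One mip level; in XNA 3.x, SurfaceFormat.Color matches
    // GDI+'s Format32bppArgb layout byte-for-byte.
    Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height,
        1, TextureUsage.None, SurfaceFormat.Color);

    // Lock the bitmap's pixels and copy them out in one block
    // (for Format32bppArgb the stride is width * 4, so the sizes line up).
    BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    byte[] bytes = new byte[data.Stride * bmp.Height];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
    bmp.UnlockBits(data);

    tx.SetData(bytes);
    return tx;
}

This avoids both the PNG round-trip of the first approach and any per-pixel GetPixel calls, which is where the time usually goes.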
They changed the format from BGRA to RGBA in XNA 4.0, so that method gives strange colors; the red and blue channels need to be switched. Here's a method I wrote that is super fast! (It loads 1500 textures of 256x256 pixels in about 3 seconds.)
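The commenter's method is likewise missing from this page; the following is a hypothetical reconstruction of what the XNA 4.0 variant could look like, differing from the sketch above in the in-place red/blue swap and the simpler 4.0 constructor:

using System.Drawing;
using System.Drawing.Imaging;
using Microsoft.Xna.Framework.Graphics;

public static Texture2D FromBitmap40(GraphicsDevice device, Bitmap bmp)
{
    BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    byte[] bytes = new byte[data.Stride * bmp.Height];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
    bmp.UnlockBits(data);

    // XNA 4.0's SurfaceFormat.Color is RGBA; GDI+ hands back BGRA,
    // so swap the B and R bytes of each 4-byte pixel in place.
    for (int i = 0; i < bytes.Length; i += 4)
    {
        byte b = bytes[i];
        bytes[i] = bytes[i + 2];
        bytes[i + 2] = b;
    }

    Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height); // 4.0 constructor
    tx.SetData(bytes);
    return tx;
}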
I found I had to specify the PixelFormat as .Format32bppArgb when using LockBits, as you suggest, to grab webcam images.
When I first read this question, I assumed it was SetData performance that was the limit. However, reading the OP's comments on the top answer, he seems to be allocating a lot of large Texture2Ds.

As an alternative, consider having a pool of Texture2Ds that you allocate as needed and return to the pool when no longer needed.

The first time each texture file is needed (or in a "pre-load" at the start of your process, depending on where you want the delay), load each file into a byte[] array. (Store those byte[] arrays in an LRU cache, unless you are sure you have enough memory to keep them all around all the time.) Then when you need one of those textures, grab one of the pool textures (allocating a new one if none of the appropriate size is available) and SetData from your byte array; voilà, you have a texture.

[I've left out important details, such as the need for a texture to be associated with a specific device, but you can determine any needs from the parameters of the methods you are calling. The point I am making is to minimize calls to the Texture2D constructor, especially if you have a lot of large textures.]
If you get really fancy, and are dealing with many different size textures, you can also apply LRU Cache principles to the pool. Specifically, track the total number of bytes of "free" objects held in your pool. If that total exceeds some threshold you set (maybe combined with the total number of "free" objects), then on next request, throw away oldest free pool items (of the wrong size, or other wrong parameters), to stay below your allowed threshold of "wasted" cache space.
BTW, you might do fine simply tracking the threshold, and throwing away all free objects when threshold is exceeded. The downside is a momentary hiccup the next time you allocate a bunch of new textures - which you can ameliorate if you have information about what sizes you should keep around. If that isn't good enough, then you need LRU.
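To make the idea concrete, here is a minimal sketch of such a pool (a hypothetical class; names and structure are mine, not the answerer's), implementing the simple "discard everything past a byte threshold" variant against the XNA 4.0 API:

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Texture2DPool
{
    private readonly GraphicsDevice device;
    private readonly long maxFreeBytes;   // threshold for "wasted" pooled bytes
    private long freeBytes;
    private readonly Dictionary<Point, Queue<Texture2D>> free =
        new Dictionary<Point, Queue<Texture2D>>();

    public Texture2DPool(GraphicsDevice device, long maxFreeBytes)
    {
        this.device = device;
        this.maxFreeBytes = maxFreeBytes;
    }

    // Reuse a free texture of the requested size if one exists, otherwise
    // allocate a new one (the expensive call this pool exists to minimize).
    public Texture2D Acquire(int width, int height)
    {
        Queue<Texture2D> queue;
        if (free.TryGetValue(new Point(width, height), out queue) && queue.Count > 0)
        {
            freeBytes -= (long)width * height * 4; // SurfaceFormat.Color: 4 bytes/pixel
            return queue.Dequeue();
        }
        return new Texture2D(device, width, height);
    }

    // Return a texture to the pool; if too many bytes sit unused,
    // discard all free objects (the simple variant described above).
    public void Release(Texture2D tex)
    {
        Point key = new Point(tex.Width, tex.Height);
        Queue<Texture2D> queue;
        if (!free.TryGetValue(key, out queue))
            free[key] = queue = new Queue<Texture2D>();
        queue.Enqueue(tex);
        freeBytes += (long)tex.Width * tex.Height * 4;

        if (freeBytes > maxFreeBytes)
        {
            foreach (Queue<Texture2D> q in free.Values)
                while (q.Count > 0)
                    q.Dequeue().Dispose();
            freeBytes = 0;
        }
    }
}

A caller would Acquire a texture of the tile's size, SetData into it from the cached byte[], draw it, and Release it once the tile is no longer visible.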