Idea: how to interactively render a large image series with GPU-based direct volume rendering

Posted on 2024-10-03 14:40:25


I'm looking for ideas on how to convert a 30+ GB series of 2000+ color TIFF images into a dataset that can be visualized at interactive frame rates using GPU-based volume rendering (OpenCL / OpenGL / GLSL). I want to use a direct volume visualization approach rather than surface fitting (i.e. raycasting instead of marching cubes).

The problem is two-fold. First, I need to convert my images into a 3D dataset. The first thing that came to mind is to treat all the images as 2D textures and simply stack them to create a 3D texture.
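Stacking the slices is indeed the standard route: load each 2D image into one z-layer of a contiguous 3D array, then upload the whole array to the GPU in a single `glTexImage3D` call (target `GL_TEXTURE_3D`). A minimal sketch of the stacking step, using synthetic slices as a stand-in for the TIFF loader (reading the real files with e.g. `tifffile` is assumed, not shown):

```python
import numpy as np

# Synthetic stand-in for the TIFF series: in practice each slice would be
# decoded from disk with an image library (tifffile, PIL, ...).
depth, height, width = 8, 64, 64
slices = [np.full((height, width), z, dtype=np.uint8) for z in range(depth)]

# Stack the 2D slices along a new leading axis to form the 3D volume.
volume = np.stack(slices, axis=0)   # shape: (depth, height, width)

print(volume.shape)  # (8, 64, 64)
```

With a 30+ GB series the full stack will not fit in GPU memory, so in practice the upload happens per sub-volume (`glTexSubImage3D`) rather than all at once.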

The second problem is achieving interactive frame rates. For this I will probably need some sort of downsampling combined with "details-on-demand": loading the high-resolution dataset when zooming in.
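For the downsampling side, one coarser level can be built by averaging each 2×2×2 block of voxels into a single voxel, exactly like one mip level of a 3D texture. A sketch, assuming dimensions divisible by two:

```python
import numpy as np

# Full-resolution volume (synthetic stand-in for the real dataset).
volume = np.arange(8 * 64 * 64, dtype=np.float32).reshape(8, 64, 64)

def downsample2x(vol):
    """Average each 2x2x2 block into one voxel (one mip level)."""
    d, h, w = vol.shape
    return vol.reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))

lowres = downsample2x(volume)
print(lowres.shape)  # (4, 32, 32)
```

Repeating this gives a resolution pyramid: render the coarse level while the user is interacting, and stream in finer levels on demand for the region being zoomed into.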

A first step-by-step approach I found is:

  1. polygonize the complete volume data through layer-by-layer processing, generating the corresponding image textures;
  2. carry out all essential transformations through vertex processor operations;
  3. divide the polygonal slices into smaller fragments, recording the corresponding depth and texture coordinates;
  4. in fragment processing, use fragment shader programming to enhance the rendering of the fragments.
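The four steps above describe classic texture-slice volume rendering: the GPU's blending stage does the actual compositing when the textured slice polygons are drawn back to front. The accumulation blending performs can be sketched on the CPU, assuming a grayscale volume and a simple linear opacity transfer function (both assumptions for illustration; real systems use lookup-table transfer functions):

```python
import numpy as np

# Synthetic grayscale volume in [0, 1]; shape (depth, height, width).
rng = np.random.default_rng(0)
volume = rng.random((16, 32, 32)).astype(np.float32)

# Per-voxel opacity and color from a simple linear transfer function.
alpha = 0.1 * volume
color = volume  # emissive gray

# Back-to-front compositing of axis-aligned slices; this is the same
# accumulation GL_BLEND performs when slice polygons are drawn back to front:
#   dst = src_color * src_alpha + dst * (1 - src_alpha)
image = np.zeros((32, 32), dtype=np.float32)
for z in range(volume.shape[0] - 1, -1, -1):
    image = color[z] * alpha[z] + image * (1.0 - alpha[z])

print(image.shape)  # (32, 32)
```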

But I have no concrete idea of how to start implementing this approach.

I would love to see some fresh ideas, or suggestions on how to start implementing the approach outlined above.


Comments (2)

俏︾媚 2024-10-10 14:40:25


If anyone has any fresh ideas in this area, they're probably going to be trying to develop and publish them. It's an ongoing area of research.

In your "point-wise approach", it seems like you have outlined the basic method of slice-based volume rendering. This can give good results, but many people are switching to a hardware raycasting method. There is an example of this in the CUDA SDK if you are interested.
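A raycaster replaces the slice polygons with a per-pixel loop: each view ray marches through the volume, sampling and compositing front to back. A CPU sketch of the loop a GLSL fragment shader would run per pixel, with orthographic rays along the z axis and nearest-neighbor sampling (simplifying assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((16, 32, 32)).astype(np.float32)  # (depth, h, w)
alpha_scale = 0.1  # assumed linear opacity transfer function

depth, h, w = volume.shape
image = np.zeros((h, w), dtype=np.float32)    # accumulated color per ray
accum_a = np.zeros((h, w), dtype=np.float32)  # accumulated opacity per ray

# Front-to-back compositing:
#   C += (1 - A) * a_i * c_i ;  A += (1 - A) * a_i
for z in range(depth):
    sample = volume[z]
    a = alpha_scale * sample
    image += (1.0 - accum_a) * a * sample
    accum_a += (1.0 - accum_a) * a
    if accum_a.min() > 0.95:  # all rays nearly opaque: stop marching
        break

print(image.shape)  # (32, 32)
```

Front-to-back order is what makes early ray termination possible: once a ray is nearly opaque, further samples cannot change its color, so the shader can exit its loop early.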

A good method for hierarchical volume rendering is detailed by Crassin et al. in their paper Gigavoxels. It uses an octree-based approach that loads bricks into memory only when they are needed.
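The brick idea behind that paper can be illustrated without the octree: split the volume into fixed-size bricks and keep only those that actually contain data, so empty space never has to be uploaded. A sketch using a flat brick grid rather than a real octree, with an assumed brick size:

```python
import numpy as np

rng = np.random.default_rng(2)
volume = rng.random((32, 32, 32)).astype(np.float32) + 0.5
volume[:16] = 0.0                 # make half the volume empty
BRICK = 8                         # assumed brick edge length

# Keep only bricks containing data; empty bricks need never reach the GPU.
bricks = {}
d, h, w = volume.shape
for z in range(0, d, BRICK):
    for y in range(0, h, BRICK):
        for x in range(0, w, BRICK):
            b = volume[z:z + BRICK, y:y + BRICK, x:x + BRICK]
            if b.max() > 0.0:     # occupancy test; real systems use min/max trees
                bricks[(z, y, x)] = b

print(len(bricks))  # 32 of the 64 bricks are non-empty
```

An octree adds coarser levels on top of this brick pool, so distant or fast-moving views can be served from low-resolution bricks while detailed ones stream in.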

A very good introductory book in this area is Real-Time Volume Graphics.

原来是傀儡 2024-10-10 14:40:25


I've done a bit of volume rendering, though my code generated an isosurface using marching cubes and displayed that. However, in my modest self-education on volume rendering I did come across an interesting short paper: Volume Rendering on Common Computer Hardware. It comes with a source example too. I never got around to checking it out, but it seemed promising. It is in DirectX though, not OpenGL. Maybe it can give you some ideas and a place to start.
