FPGA for 3D rendering/modelling

Posted 2024-08-06 23:26:39 · 357 characters · 6 views · 0 comments


I am an experienced C#/.NET developer (though that may be irrelevant here, since FPGAs are on another level of complexity). My C# is not quite expert level, as I still occasionally look things up (not very often, though I do struggle with some syntax and advanced concepts). My boss works with FPGAs and has recommended I get involved, easing myself in. To my surprise he is not discouraging me, even though I am a junior developer and this is a complex technology.

So my question is: what is the best way to learn FPGA? I am gathering books and so on.

I am looking at scalable 3D modelling and rendering (ideally in a Windows app where the user is waiting for an instant response). CUDA is popular, but according to my boss it is not as fast.

Is FPGA the way to go for this sort of project?

Thanks


Comments (4)

空城旧梦 2024-08-13 23:26:39

Honestly, I think your boss is wrong. NVIDIA and AMD are selling real silicon hardware purpose-designed for accelerated 3D rendering. Unless your specific problem is one that doesn't map to existing shader/CUDA paradigms, there's no way a configurable hardware device is going to compete. For the same reason, even the best FPGA-based CPUs (Xilinx's MicroBlaze, Altera's Nios) are toys compared even to low-end embedded ARM cores. (Often useful toys, mind you, but not competitive except in designs with otherwise unused FPGA gate space.)

But I can definitely recommend learning FPGAs and HDL programming. This is one area where "gathering books" really isn't going to help you. What you have to do is get a cheap development board (there are many on the market in the US$100-200 range), download the matching toolchain, and start writing and testing code.
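To give a sense of what "start writing and testing code" looks like, here is the classic first dev-board project as a minimal Verilog sketch. The clock frequency and pin names are assumptions; on a real board they come from the vendor's constraints file.

```verilog
// Minimal "hello world" blinker: divide the board clock down until
// the toggling is visible. Assumes a 50 MHz input clock (board-specific).
module blink (
    input  wire clk,   // 50 MHz board clock (assumption)
    output wire led    // mapped to an on-board LED via the constraints file
);
    reg [24:0] counter = 25'd0;

    always @(posedge clk)
        counter <= counter + 25'd1;

    // Bit 24 toggles every 2^24 cycles, i.e. roughly 1.5 Hz at 50 MHz
    assign led = counter[24];
endmodule
```

Simulating a module like this in the vendor toolchain (or a free simulator) before loading it onto the board is exactly the write/test loop described above.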

探春 2024-08-13 23:26:39

Why not learn how to use the hardware acceleration that comes with modern PCs today? I would bet that using OpenGL or DirectX (whatever it is called these days) with hardware acceleration will perform better.

I guess if your application is going to run on some kind of custom embedded device, maybe you want to create your own hardware. But for PC apps it is probably too expensive, and has almost no benefit over a software solution that has already had crazy amounts of work done to tune its performance.

My opinion: take advantage of all the work that has been put into 3D gaming technology.

一向肩并 2024-08-13 23:26:39

As Andy Ross says, I doubt FPGA is the way you want to go for that type of problem - you will also need to interface it with the PC somehow.

I would start by getting a DevKit and playing around with that. Make an LED blink - I've always found that to be the hardest part when I start with a new embedded device o.O. Get some form of comms going (RS232 / TCP), which is probably already on the DevBoard. Then implement some math functions on it that take parameters and pass results back via the comms.
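The "math function behind the comms" step above might look something like this sketch: a tiny pipelined multiplier with a valid strobe, where the operands would arrive over the board's serial link and the result would go back the same way. The module and signal names are illustrative, not from any real core.

```verilog
// Sketch: 16x16 multiply with a one-cycle latency and a valid handshake.
// A UART/TCP bridge (not shown) would feed in_valid/a/b and read result.
module mul16 (
    input  wire        clk,
    input  wire        in_valid,   // pulse high when a and b are ready
    input  wire [15:0] a,
    input  wire [15:0] b,
    output reg         out_valid,  // high one cycle later, when result is ready
    output reg  [31:0] result
);
    always @(posedge clk) begin
        out_valid <= in_valid;
        if (in_valid)
            result <= a * b;
    end
endmodule
```

Starting with something this small keeps the focus on the round trip (PC to board and back) rather than on the math itself.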

叹梦 2024-08-13 23:26:39

Well, scalable 3D rendering on an FPGA. How would you approach it? FPGAs are great for scaling the classic SIMD architecture to the data size of your liking (or limitation); with enough parallelism you could process data at an acceptable rate even at 100 MHz. In my opinion your only real limitations are memory bandwidth and speed. Don't forget you need a graphics controller to be able to use the data you spit out. You would in essence be building all the hardware to do such a complicated task. Are you sure you are capable of building a SIMD processor capable of 3D rendering? What would your hardware design be?
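The "scale SIMD to the data size of your liking" idea can be sketched in a few lines of parameterised Verilog: a bank of lanes that all update in the same clock cycle. The lane count and width here are arbitrary illustrative values; a real renderer's datapath would be far more involved.

```verilog
// Sketch: LANES parallel adders operating on packed operand vectors,
// all completing in a single clock. Widening LANES trades area for
// throughput, which is the FPGA scaling argument made above.
module simd_add #(
    parameter LANES = 8,
    parameter WIDTH = 16
) (
    input  wire                   clk,
    input  wire [LANES*WIDTH-1:0] a,    // LANES packed WIDTH-bit operands
    input  wire [LANES*WIDTH-1:0] b,
    output reg  [LANES*WIDTH-1:0] sum
);
    integer i;
    always @(posedge clk)
        for (i = 0; i < LANES; i = i + 1)
            sum[i*WIDTH +: WIDTH] <= a[i*WIDTH +: WIDTH] + b[i*WIDTH +: WIDTH];
endmodule
```

Even at 100 MHz, 8 lanes give 800 million adds per second; the hard part, as noted above, is feeding such a bank from memory fast enough.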

As many others have pointed out ITT, CUDA from NVIDIA is a great alternative, and the new Fermi architecture seems promising. But if you're looking for low cost, small size, and low power consumption, I can't recommend CUDA. Sure, it's great for solving the task, but if your task has wheels and a battery, things get complicated.

I would think a task better suited to FPGAs than graphics is biological computation, a problem space in need of even greater parallelism than graphics.
