Converting C# methods to C++ methods
I'm exploring various options for mapping common C# code constructs to C++ CUDA code for running on a GPU. The structure of the system is as follows (arrows represent method calls):
C# program -> C# GPU lib -> C++ CUDA implementation lib
A method in the GPU library could look something like this:
public static void Map<T>(this ICollection<T> c, Func<T,T> f)
{
//Call 'f' on each element of 'c'
}
This is an extension method on ICollection<> types which runs a function on each element. However, what I would like it to do is call the C++ library and make it run the method on the GPU. This would require the function to be, somehow, translated into C++ code. Is this possible?
To elaborate: if the user of my library executes a method (in C#) with some arbitrary code in it, I would like to translate this code into the C++ equivalent so that I can run it on CUDA. I have the feeling that there is no easy way to do this, but I would like to know whether there is any way to do it, or to achieve some of the same effect.
One thing I was wondering about is capturing the function to translate in an Expression and using that to map it to a C++ equivalent. Does anyone have experience with this?
There's CUDA.Net if you want a reference for how C# can be run on a GPU.
To be honest, I'm not sure I fully understand what you're getting at. However, you may be interested in this project, which converts .NET applications/libraries into straight C++ without requiring any .NET framework: http://www.codeplex.com/crossnet
I would recommend the following process to accelerate some of your computation using CUDA from a C# program:
Interesting question. I'm not an expert at C#, but I think an ICollection is a container of objects. If each element of c were, say, a pixel, you'd have to do a lot of marshalling to convert it into a buffer of bytes or floats that CUDA could use. I suspect that would slow everything down enough to negate the advantage of doing anything on the GPU.
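A minimal sketch of the marshalling cost being described, using a hypothetical `Pixel` type standing in for a managed C# object: every field of every object has to be copied into a flat float buffer before the GPU can touch it (and copied back afterwards), which is pure overhead relative to the computation itself:

```cpp
#include <cassert>
#include <vector>

// Hypothetical pixel type standing in for a managed C# object.
struct Pixel { float r, g, b; };

// Flatten an array-of-structs into the flat float buffer a GPU wants.
std::vector<float> marshal(const std::vector<Pixel>& pixels) {
    std::vector<float> buf;
    buf.reserve(pixels.size() * 3);
    for (const Pixel& p : pixels) {
        buf.push_back(p.r);   // every field is copied once on the way in,
        buf.push_back(p.g);   // and the results must be copied back out
        buf.push_back(p.b);   // again after the kernel finishes
    }
    return buf;
}
```

For the GPU to win, the per-element computation has to be expensive enough to amortize these two copies plus the host-to-device transfer.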
What you could do is write your own IQueryable LINQ provider, as is done for LINQ to SQL to translate LINQ queries into SQL. However, one problem I see with this approach is that LINQ queries are usually evaluated lazily. In order to benefit from pipelining, this is probably not a viable solution.
It might also be worth investigating how to implement Google's MapReduce API for C# and CUDA, and then use an approach similar to PyCuda to ship the logic to the GPU. In that context, it might also be useful to take a look at the already existing MapReduce implementation in CUDA.
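As a rough illustration of the map/reduce shape being suggested (plain sequential C++ here, with hypothetical names; a CUDA version would run the map as a kernel with one thread per element and the reduce as a parallel tree reduction):

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// Map step: square each element (stands in for the user's function).
// Reduce step: sum the mapped values. On a GPU, the map runs one
// thread per element, and the reduce becomes a parallel tree
// reduction rather than the serial fold used here.
float map_reduce_sum_of_squares(const std::vector<float>& in) {
    std::vector<float> mapped(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        mapped[i] = in[i] * in[i];                               // map
    return std::accumulate(mapped.begin(), mapped.end(), 0.0f);  // reduce
}
```

The appeal of the MapReduce framing is that the user only supplies the two small functions, so only those need translating to C++, not arbitrary C# code.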
That's a very interesting question and I have no idea how to do this.
However, the Brahma library seems to do something very similar. You can define functions using LINQ which are then compiled to GLSL shaders to run efficiently on a GPU. Have a look at their code and in particular the Game of Life sample.