Is there any online compiler (with an executor) that can compile and run applications using GPU-specific C/C++ code?
Generally, I need some online compiler that can compile and execute a provided program and output execution speed and other statistics. The whole program can be in one C file, and it would use whatever GPU C/C++ library is provided. I want to compile at least C code. Does any GPU vendor provide such a compiler? Actually, my problem is this: I have a powerful CPU and a weak GPU on my machine. I need to test some algorithms that are specific to GPUs and get statistics on their execution. I would like to test my programs any way possible, so if there is no such online GPU service, maybe there is an emulator that can output the time and other statistics I would get on some real GPU? (Meaning I would give it a program, it would execute it on my CPU, but somehow count the time as if it were running on some GPU.)
So is it possible to somehow test GPU-specific programs without having a GPU card, meaning on emulation software or somewhere in an internet cloud?
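For illustration, the kind of single-file program I have in mind might look like the sketch below (the kernel and file name are placeholders of my own; the timing uses the standard CUDA event API):

```cuda
// sketch.cu -- hypothetical single-file example: one trivial kernel,
// timed with CUDA events, printing an execution statistic at the end.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;   // trivial per-element work, just something to time
}

int main(void)
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed time between events, in ms
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_data);
    return 0;
}
```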
Comments (3)
Amazon EC2 recently added support for "GPU instances": normal HPC instances that come with two NVIDIA Tesla "Fermi" M2050 GPUs. You can SSH into these instances, install a compiler, and go to town with them.
It'll cost $2.10/hour (or $0.74/hour if you get a Reserved Instance for a longer block of time).
If it's an option at all, I'd strongly consider just getting the GPU card(s).
The low end of any given GPU family is usually pretty cheap, and you can make some reasonable performance extrapolations from that to the high end.
If you get the CUDA developer tools and SDK from NVIDIA, you can build and run CUDA programs in emulation mode, where they simply run on the host CPU instead of on the GPU. This is a great way to learn the basics of GPU programming before you start trying to get code running on an actual GPU card.
UPDATE: Apparently emulation mode was removed in CUDA 3.1.
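For toolkits before 3.1, a rough sketch of what emulation-mode usage looked like is shown below; the -deviceemu flag and the __DEVICE_EMULATION__ macro come from those older releases, so treat the details as historical and approximate:

```cuda
// emu_sketch.cu -- hypothetical sketch of pre-CUDA-3.1 device emulation.
// Build for emulation (kernel runs on the host CPU):  nvcc -deviceemu emu_sketch.cu -o emu_sketch
// Build normally for a real GPU:                      nvcc emu_sketch.cu -o emu_sketch
#include <cstdio>
#include <cuda_runtime.h>

__global__ void hello(int *out)
{
    int i = threadIdx.x;
    out[i] = i * i;
#ifdef __DEVICE_EMULATION__
    // Under emulation the "kernel" is ordinary host code, so host-side
    // facilities such as printf can be used for debugging.
    printf("thread %d wrote %d\n", i, out[i]);
#endif
}

int main(void)
{
    const int n = 8;
    int *d_out, h_out[n];
    cudaMalloc(&d_out, n * sizeof(int));

    hello<<<1, n>>>(d_out);
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i)
        printf("%d ", h_out[i]);
    printf("\n");

    cudaFree(d_out);
    return 0;
}
```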