PhysX for massive performance via GPU?
I recently compared some of the physics engines out there for simulation and game development. Some are free, some are open source, some are commercial (one is even very commercial $$$$).
Havok, Ode, Newton (aka oxNewton), Bullet, PhysX and the "raw" built-in physics in some 3D engines.
At some stage I came to a conclusion, or rather a question:
Why should I use anything but NVidia PhysX if I can make use of its amazing performance (if I need it) due to GPU processing? With future NVidia cards I can expect further improvement independent of the regular CPU generation steps. The SDK is free and it is available for Linux as well. Of course it is a bit of vendor lock-in and it is not open source.
What's your view or experience? If you were starting development right now, would you agree with the above?
cheers
Comments (7)
Disclaimer: I've never used PhysX, my professional experience is restricted to Bullet, Newton, and ODE. Of those three, ODE is far and away my favorite; it's the most numerically stable and the other two have maturity issues (useful joints not implemented, legal joint/motor combinations behaving in undefined ways, &c).
You alluded to the vendor lock-in issue in your question, but it's worth repeating: if you use PhysX as your sole physics solution, people using AMD cards will not be able to run your game (yes, I know it can be made to work, but it's not official or supported by NVIDIA). One way around this is to define a failover engine, using ODE or something similar on systems with AMD cards. This works, but it doubles your workload. It's seductive to think that you'll be able to hide the differences between the two engines behind a common interface and write the bulk of your game physics code once, but most of your difficulties with game physics will be in dealing with the idiosyncrasies of your particular physics engine, such as deciding on values for things like contact friction and restitution. Those values don't have consistent meanings across physics engines and (mostly) can't be formally derived, so you're stuck finding good-looking, playable values by experiment. With PhysX plus a failover you're doing all that scut work twice.
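To make the failover idea concrete, here is a rough sketch of the kind of wrapper I mean. IPhysicsWorld, the factory functions and the contact values are all made up for illustration, not anything from the PhysX or ODE SDKs; each backend would be a hand-written adapter over the real SDK, compiled in its own source file.

```cpp
// Hypothetical failover wrapper -- none of these names come from PhysX or ODE.
#include <memory>

class IPhysicsWorld {
public:
    virtual ~IPhysicsWorld() = default;
    // Contact tuning still leaks through: the same numbers will not feel the
    // same on both backends, so each build has to be play-tested separately.
    virtual void setDefaultContact(float friction, float restitution) = 0;
    virtual void step(float dt) = 0;
};

// Adapters over the real SDKs, implemented in separate backend source files.
std::unique_ptr<IPhysicsWorld> makePhysXWorld();
std::unique_ptr<IPhysicsWorld> makeOdeWorld();

std::unique_ptr<IPhysicsWorld> createWorld(bool hasNvidiaGpu) {
    if (hasNvidiaGpu) {
        auto world = makePhysXWorld();          // GPU-accelerated path
        world->setDefaultContact(0.6f, 0.1f);   // placeholder values tuned for PhysX
        return world;
    }
    auto world = makeOdeWorld();                // CPU failover path
    world->setDefaultContact(0.9f, 0.2f);       // retuned separately for ODE
    return world;
}
```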
At a higher level, I don't think any of the stream processing APIs are fully baked yet, and I'd be reluctant to commit to one until, at the very least, we've seen how the customer reaction to Intel's Larrabee shapes people's designs.
So, far from seeing PhysX as the obvious choice for high-end game development, I'd say it should be avoided unless either you don't think people with AMD cards make up a significant fraction of your player base (highly unlikely) or you have enough coding and QA manpower to test two physics engines (more plausible, though if your company is that wealthy I've heard good things about Havok). Or, I guess, if you've designed a physics game with performance demands so intense that only streaming physics can satisfy you - but in that case, I'd advise you to start a band and let Moore's Law do its thing for a year or two.
An early 2013 update answer: I develop for what I consider the big three OSes: Linux, OS X, MS. I also develop with the big three physics libraries: PhysX, Havok, Bullet.
Concerning PhysX, I recently did some tests with the newest incarnation, 3.2.2 as of the time of this writing. In my opinion nVidia has really reduced the effectiveness of the library. The biggest issue is the lack of GPU acceleration for rigid bodies. The lib only accelerates particles and cloth, and even those do not interface with general rigid bodies. I am completely puzzled by nVidia doing this, since they have a huge marketing drive pushing GPU-accelerated apps, focusing on scientific computation where a large driving force is physics simulation.
So while my expectation was that the kings of physics sim would be PhysX, Havok, and Bullet, in that order, I see the reverse in reality. Bullet has released lib 2.8.1 with a sampling of OpenCL support. Bullet is a relatively small lib with generous licensing. Their goal is to have release 3 with fully integrated OpenCL rigid-body acceleration.
Part of the comments talk about multiple code paths. My opinion is that this is not too big a deal. I already support three OSes with minimal hard-coded platform-specific support (threading for the most part; otherwise I avoid OS-specific code and use C++ and std lib templates). It is similar for the physics libraries: I use a shared library and abstract a common interface. This is fine because physics doesn't change much ;) You will still need to set up a simulation environment, manage objects, render iterations in the environment, and clean up when finished. The rest is flash, implemented at leisure.
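As a rough, POSIX-flavoured sketch of what I mean by a shared library behind a common interface (the interface, the createPhysicsBackend entry point and the plugin file name are mine, not part of PhysX, Havok or Bullet; on Windows you would use LoadLibrary/GetProcAddress instead of dlopen):

```cpp
// Illustrative plugin loader; link with -ldl on Linux.
#include <dlfcn.h>
#include <cstdio>

class IPhysicsBackend {                 // the small lifecycle every backend implements
public:
    virtual ~IPhysicsBackend() = default;
    virtual bool init() = 0;            // set up the simulation environment
    virtual void step(float dt) = 0;    // advance one iteration
    virtual void shutdown() = 0;        // clean up when finished
};

using CreateBackendFn = IPhysicsBackend* (*)();

IPhysicsBackend* loadBackend(const char* pluginPath) {
    // Handle is intentionally kept open for the lifetime of the program.
    void* handle = dlopen(pluginPath, RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "dlopen: %s\n", dlerror()); return nullptr; }
    auto create = reinterpret_cast<CreateBackendFn>(dlsym(handle, "createPhysicsBackend"));
    if (!create) { std::fprintf(stderr, "dlsym: %s\n", dlerror()); return nullptr; }
    return create();                    // the plugin allocates its concrete backend
}

int main() {
    // Pick the plugin at runtime, e.g. based on detected hardware or a config file.
    IPhysicsBackend* physics = loadBackend("./libphysics_bullet.so");
    if (!physics || !physics->init()) return 1;
    for (int frame = 0; frame < 600; ++frame)
        physics->step(1.0f / 60.0f);    // object management and rendering would go here
    physics->shutdown();
    delete physics;
    return 0;
}
```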
With the advent of OpenCL in mainstream libraries (nVidia Cuda is very close - see Bullet OpenCL demos) the physics plugin work will shrink.
So, starting from scratch and only concerned with physics modeling? You can't go wrong with Bullet. Small, flexible license (free), and very close to production-ready OpenCL, which will be cross-platform across the big three OSes and GPU solutions.
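For reference, a first Bullet 2.x rigid-body world is only a few dozen lines, along the lines of the library's usual hello-world pattern; the shapes, masses and step count below are arbitrary:

```cpp
// A sphere dropped onto a static ground plane with Bullet 2.x.
// Link against BulletDynamics, BulletCollision and LinearMath.
#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main() {
    // Standard CPU pipeline: collision config, dispatcher, broadphase, solver, world.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -9.81f, 0));

    // Static ground plane at y = 0.
    btStaticPlaneShape groundShape(btVector3(0, 1, 0), 0);
    btDefaultMotionState groundMotion;
    btRigidBody ground(btRigidBody::btRigidBodyConstructionInfo(0, &groundMotion, &groundShape));
    world.addRigidBody(&ground);

    // Dynamic 1 kg sphere of radius 0.5 m, starting 10 m up.
    btSphereShape sphereShape(0.5f);
    btVector3 inertia(0, 0, 0);
    sphereShape.calculateLocalInertia(1.0f, inertia);
    btDefaultMotionState sphereMotion(btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
    btRigidBody sphere(btRigidBody::btRigidBodyConstructionInfo(1.0f, &sphereMotion, &sphereShape, inertia));
    world.addRigidBody(&sphere);

    // Step at 60 Hz and watch the sphere fall and settle on the plane.
    for (int i = 0; i < 180; ++i) {
        world.stepSimulation(1.0f / 60.0f, 10);
        btTransform t;
        sphere.getMotionState()->getWorldTransform(t);
        std::printf("step %3d: sphere y = %.3f\n", i, t.getOrigin().getY());
    }

    world.removeRigidBody(&sphere);
    world.removeRigidBody(&ground);
    return 0;
}
```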
Good Luck !
You may find this interesting:
http://www.xbitlabs.com/news/video/display/20091001171332_AMD_Nvidia_PhysX_Will_Be_Irrelevant.html
It is biased ... it's basically an interview with AMD ... but it makes some points which I think are worth considering in your case.
Because of the issues David Seiler pointed out, switching physics engines some time in the future may be a huge/insurmountable problem... particularly if the gameplay is tightly bound to the physics.
So, if you really want hardware accelerated physics in your engine NOW, go for Physx, but be aware that when solutions such as those postulated by AMD in this article become available (they absolutely will but they're not here yet), you will be faced with unpleasant choices:
1) rewrite your engine to use (insert name of new cross-platform hardware accelerated physics engine), potentially changing the dynamics of your game in a Bad Way
2) continue using Physx only, entirely neglecting AMD users
3) try to get Physx to work on AMD GPUs (blech...)
Aside from David's idea of using a CPU physics engine as a fallback (doing twice the work and producing two engines which do not behave identically), your only other option is to use pure CPU physics.
However, as stuff like OpenCL becomes mainstream we may see ODE/Bullet/kin starting to incorporate that ... IOW if you code it now with ODE/Bullet/kin you might (probably will eventually) get the GPU acceleration for "free" later on (no changes to your code). It'll still behave slightly differently with the GPU version (an unavoidable problem because of the butterfly effect and differences in floating-point implementation), but at least you'll have the ODE/Bullet/kin community working with you to reduce that gap.
That's my recommendation: use an open source physics library which currently only uses the CPU, and wait for it to make use of GPUs via OpenCL, CUDA, ATI's stream language, etc. Performance will be screaming fast when that happens, and you'll save yourself headaches.
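For what it's worth, the CPU-only code you would write today against ODE's C API looks something like the sketch below (gravity, mass, radius and step size are arbitrary); if the library later grows an OpenCL/CUDA backend internally, this calling code would not need to change:

```cpp
// A single free-falling sphere body integrated with ODE (no collision space,
// just the world stepping). Link against the ode library.
#include <ode/ode.h>
#include <cstdio>

int main() {
    dInitODE();
    dWorldID world = dWorldCreate();
    dWorldSetGravity(world, 0, 0, -9.81);

    dBodyID body = dBodyCreate(world);
    dMass m;
    dMassSetSphereTotal(&m, 1.0, 0.5);   // total mass 1 kg, radius 0.5 m
    dBodySetMass(body, &m);
    dBodySetPosition(body, 0, 0, 10);

    for (int i = 0; i < 120; ++i) {
        dWorldQuickStep(world, 0.01);    // 10 ms fixed step
        const dReal* p = dBodyGetPosition(body);
        std::printf("t=%.2fs  z=%.3f\n", (i + 1) * 0.01, p[2]);
    }

    dWorldDestroy(world);
    dCloseODE();
    return 0;
}
```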
I have used ODE and am now using PhysX. PhysX makes building scenes easier and (my personal opinion) seems more realistic; however, there is no adequate documentation for PhysX, in fact hardly any documentation at all. On the other hand, ODE is open source and there are plenty of documents, tutorials, etc.
PS: Using GPU acceleration is helping me and my colleagues significantly; we are using APEX destruction and PhysX particles.
The hypothetical benefit of future gfx cards is all well and good, but there will also be future benefits from extra CPU cores. Can you be sure that future gfx cards will always have spare capacity for your physics?
But probably the best reason, albeit a little vague in this case, is that performance isn't everything. As with any 3rd party library, you may need to support and upgrade that code for years to come, and you're going to want to make sure that the interfaces are reasonable, the documentation is good, and that it has the capabilities that you require.
There may also be more mathematical concerns such as some APIs offering more stable equation solving and the like, but I'll leave comment on that to an expert.
PhysX works with non-nVidia cards, it just doesn't get accelerated, which leaves it in the same position the other engines start from. The problem is if you have a physics simulation which is only workable with hardware physics acceleration.
If all your code is massively parallelizable, then go for it!
For everything else, GPUs are woefully inadequate.