General animation question
I am new to the idea of animating things in a graphics environment, so I would like to clarify what the correct approach is.
(Just to set the scene, although it's not particularly relevant to the question: I'm working with OpenGL ES on the iPhone.)
If I go to an artist and ask them to create a 3D model animation of a walking dwarf that won't be dynamic, how will they give me the data?
Will they:
a) Create a 3D bone model, animate the bone paths in a path list together with timestamps and interpolation types, and then simply define each bone's 3D model? I.e. a walking dwarf would be a spine, hands, arms, legs, feet, neck, and head, and the modeller creates parts for each of those bones and gives me the animation paths...?
or
b) The modeller creates one full model, then deforms it and somehow saves the deformation?
c) I assume no one would actually store 30 models of the same object and then just present those, unless it was a very low poly-count model? Or am I wrong?
What is the best object format for 3D animations?
Any other advice/tips on techniques, mechanisms, etc. will be greatly appreciated!
You have basically the right ideas. There are two main approaches, skeletal and non-skeletal, both of which tend to involve supplying keyframes.
With non-skeletal animation, you might be supplied with, say, ten frames of animation to draw while walking and the amount of time it takes to progress from one frame to the next. So it's the exact 3D analogue of the way 2D pixel sprites used to work. You can either work out which frame is currently visible or apply tweening. If you know that you're halfway between a frame where a vertex is at V1 and a frame where the same vertex is at V2, you can position it halfway between V1 and V2. So you're linearly interpolating all vertex positions between frames. This looks a little smoother than just flicking through frames, but does tend to distort geometry a little, so you still need the frames to be reasonably dense.
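As a minimal sketch of that linear tweening (the function name and packed x,y,z layout are illustrative, not from any particular format):

```c
#include <math.h>

/* Linearly interpolate every vertex between two keyframes.
   frame_a and frame_b are packed x,y,z triples; t is in [0, 1],
   where t = 0 gives frame_a and t = 1 gives frame_b. */
static void tween_vertices(const float *frame_a, const float *frame_b,
                           float *out, int vertex_count, float t)
{
    for (int i = 0; i < vertex_count * 3; i++)
        out[i] = frame_a[i] + (frame_b[i] - frame_a[i]) * t;
}
```

In the ES 1.x case described below, you would run this on the CPU each frame and submit `out` as your vertex array.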
With skeletal animation, the motion is described by the skeleton, which is a series of connected bones. Each keyframe is a particular orientation of the bones. Often this is a hierarchical thing, so to describe the arm you could start by giving the orientation of the upper arm relative to the shoulder, then the lower arm relative to the upper arm, the hand relative to the lower arm, each finger relative to the hand, etc. The advantage of this is that you can perform really good tweening without distortion. The halfway frame is half the rotation, propagated down the bone tree. And if you stick with quaternions for describing orientation, then it's relatively easy to interpolate in terms of 'half the rotation' with good results.
To put actual geometry over the bones, each vertex is associated with one or more bones. You give it a weighted attachment to each bone; e.g. vertices on the lower arm might be 100% attached to the lower arm bone, while vertices towards the elbow might be 80% attached to the lower arm bone and 20% to the upper. You can use a weighted sum of where the vertex would be transformed to by each relevant bone to get the actual vertex position. In that way you can get pretty good joints (albeit usually using a more complicated skeleton than my simplified explanation).
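That weighted sum can be sketched like this (the structs are illustrative; a real skinning pipeline would typically use 4x4 matrices and fixed-size influence lists):

```c
typedef struct { float v[3]; } vec3;

/* A bone transform, reduced to a 3x3 rotation plus a translation. */
typedef struct { float m[3][3]; vec3 t; } bone_xform;

/* Skin one vertex: a weighted sum of where each influencing bone
   would move the rest-pose position. Weights are assumed to sum to 1. */
static vec3 skin_vertex(vec3 rest, const bone_xform *bones,
                        const int *indices, const float *weights, int n)
{
    vec3 out = {{0.0f, 0.0f, 0.0f}};
    for (int i = 0; i < n; i++) {
        const bone_xform *b = &bones[indices[i]];
        for (int r = 0; r < 3; r++) {
            /* Position of the vertex under this bone alone... */
            float x = b->m[r][0]*rest.v[0] + b->m[r][1]*rest.v[1]
                    + b->m[r][2]*rest.v[2] + b->t.v[r];
            /* ...blended in by its attachment weight. */
            out.v[r] += weights[i] * x;
        }
    }
    return out;
}
```

So an elbow vertex with weights 0.8/0.2 ends up 80% of the way towards where the lower-arm bone would carry it.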
In iPhone terms, under ES 1.x you're very likely to have to do non-skeletal tweening on the CPU, which isn't as much of a performance problem as you might guess because the PowerVR MBX doesn't actually keep vertex buffer objects in video RAM anyway. As long as you're accumulating your buffer in a PowerVR-friendly format (alignment matters, mostly, interleaving of position/texture coordinates/normals/etc in the prescribed order is also beneficial) then the submission to OpenGL isn't much more expensive than using a vertex buffer object.
Apple supports the GL_OES_matrix_palette extension for skeletal-style animation. For each group of vertices you can supply several modelview matrices, and for each vertex you can set the weighting of each input matrix. There are some implementation limits on the number of matrices that will likely prevent you from doing an entire model as a single set, but you can subdivide as necessary. The benefit is that you can put all your vertex data into a vertex buffer object and leave the driver and GPU to it.
On devices that support ES 2.x, you can do a much better job of non-skeletal tweening with a vertex shader. That'll allow you to use a vertex buffer object and work out the positions on the GPU. Since the ES 2.x hardware supports pushing vertex buffer objects over for full GPU management, that's a big win.
Using the ES 1.x pipeline for skeletal tweening through GL_OES_matrix_palette is likely to work as well as using the programmable pipeline, since you're already able to use vertex buffer objects.