Move camera to fit a 3D scene
I'm looking for an algorithm to fit a bounding box inside a viewport (in my case a DirectX scene). I know about algorithms for centering a bounding sphere in an orthographic camera, but I need the same for a bounding box and a perspective camera. I cannot just change the FOV because this app has FOV as a user-editable variable, so it must move the camera.
I have most of the data:
- I have the up-vector for the camera
- I have the center point of the bounding box
- I have the look-at vector (direction and distance) from the camera point to the box center
- I have projected the points on a plane perpendicular to the camera and retrieved the coefficients describing how much the max/min X and Y coords are within or outside the viewing plane.
Problems I have:
- Center of the bounding box isn't necessarily in the center of the viewport (that is, its bounding rectangle after projection).
- Since the field of view "skews" the projection (see http://en.wikipedia.org/wiki/File:Perspective-foreshortening.svg), I cannot simply use the coefficients as a scale factor to move the camera, because it will overshoot/undershoot the desired camera position.
How do I find the camera position so that it fills the viewport as pixel-perfectly as possible (the exception being that if the aspect ratio is far from 1.0, it only needs to fill one of the screen axes)?
I've tried some other things:
- Using a bounding sphere and the tangent to find a scale factor to move the camera. This doesn't work well because it doesn't take the perspective projection into account, and secondly, spheres are bad bounding volumes for my use because I have a lot of flat and long geometry.
- Iterating calls to the function to get a smaller and smaller error in the camera position. This has worked somewhat, but I can sometimes run into weird edge cases where the camera position overshoots too much and the error factor increases. Also, when doing this I didn't recenter the model based on the position of the bounding rectangle. I couldn't find a solid, robust way to do that reliably.
Help please!
7 Answers
There are many possible camera positions + orientations where the bounding box would fit inside the view frustum. But any procedure would select one specific camera position and orientation.
If you were considering bounding spheres, one solution could be to fit the sphere inside the frustum. With bounding boxes you could consider an earlier step of first positioning the camera perpendicular to the center of the largest (or smallest, whichever you prefer) box face.
I have no experience with DirectX, but moving and changing the looking direction of the camera to center a certain point should be easy.
The hard part is to do the math of deciding how far to move to view the object.
Math
If you know the bounding size s of the object in world coordinates (we are not interested in pixels or camera coordinates, since those depend on your distance) as seen from the orientation of the camera, you can compute the required distance d of the camera to the bounding shape if you know the x and y field-of-view angle a of the perspective projection. So, the math is

tan(a/2) = (s/2) / d  =>  d = (s/2) / tan(a/2)
Which will give you the distance the camera should be placed from the closest bounding surface.
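As a minimal sketch of that formula (assuming a symmetric FOV; the helper name and parameters are illustrative, not from the answer above):

```csharp
using System;

static class FitMath
{
    // d = (s / 2) / tan(a / 2): distance needed so an extent `size` (world units)
    // along one screen axis exactly fills the full FOV angle `fovRadians` for that axis.
    public static float DistanceToFit(float size, float fovRadians)
        => (size * 0.5f) / MathF.Tan(fovRadians * 0.5f);
}

// Usage: compute the distance for both axes and keep the larger one, e.g.
// float d = MathF.Max(FitMath.DistanceToFit(width, hFov), FitMath.DistanceToFit(height, vFov));
```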
I know there are some excellent answers above, but I wanted to add a ridiculously simple solution to fit the bounding sphere inside the camera frustum. It makes the assumption that you want to keep the camera Target and Forward vector the same, and simply adjust the camera's distance to the target.

Note, this won't give you the best fit, but it will give you an approximate fit, showing all geometry, in only a few lines of code and without screen-to-world transformations.

The implementation of BoundingBox in C# is found below. The important points are the Centre and Corners properties. Vector3 is a pretty standard implementation of a 3-component (X, Y, Z) vector.
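Roughly, a sketch of that idea using System.Numerics (the BoundingBox below is an illustrative stand-in for the one mentioned above, not the original implementation):

```csharp
using System;
using System.Linq;
using System.Numerics;

// Illustrative stand-in: only the Centre and Corners members matter here.
public sealed class BoundingBox
{
    public Vector3 Min, Max;
    public Vector3 Centre => (Min + Max) * 0.5f;
    public Vector3[] Corners => new[]
    {
        new Vector3(Min.X, Min.Y, Min.Z), new Vector3(Max.X, Min.Y, Min.Z),
        new Vector3(Min.X, Max.Y, Min.Z), new Vector3(Max.X, Max.Y, Min.Z),
        new Vector3(Min.X, Min.Y, Max.Z), new Vector3(Max.X, Min.Y, Max.Z),
        new Vector3(Min.X, Max.Y, Max.Z), new Vector3(Max.X, Max.Y, Max.Z),
    };
}

public static class CameraFit
{
    // Keep the camera target (box centre) and forward vector; just pull the camera
    // back far enough that a sphere enclosing all box corners fits the narrower FOV axis.
    public static Vector3 FitSphere(BoundingBox box, Vector3 forward,
                                    float verticalFov, float aspectRatio)
    {
        Vector3 centre = box.Centre;
        float radius = box.Corners.Max(c => (c - centre).Length());
        float horizontalFov = 2f * MathF.Atan(MathF.Tan(verticalFov * 0.5f) * aspectRatio);
        float fov = MathF.Min(verticalFov, horizontalFov);
        float distance = radius / MathF.Sin(fov * 0.5f); // sphere tangent to the frustum sides
        return centre - Vector3.Normalize(forward) * distance;
    }
}
```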
Since you have a bounding box, you should have a basis describing its orientation. It seems that you want to position the camera on the line coincident with the basis vector describing the smallest dimension of the box, then roll the camera so that the largest dimension is horizontal (assuming you have an OBB and not an AABB). This assumes that the aspect ratio is greater than 1.0; if not, you'll want to use the vertical dimension.
What I would attempt:
- Scale the basis vector of the smallest box dimension by boxWidth / (2 * tan(horizontalFov / 2)). Note that boxWidth is the width of the largest dimension of the box.
- Position the camera at boxCenter + scaledBasis, looking at the boxCenter.
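A rough sketch of those two steps (assuming the smallest-dimension basis vector, box width, and horizontal FOV are already known; the names are illustrative):

```csharp
using System;
using System.Numerics;

static class BasisPlacement
{
    // Place the camera along the box's smallest-dimension axis, far enough back
    // that the largest dimension spans the horizontal FOV; then look at boxCenter.
    public static Vector3 PlaceCamera(Vector3 boxCenter, Vector3 smallestDimensionBasis,
                                      float boxWidth, float horizontalFov)
    {
        float distance = boxWidth / (2f * MathF.Tan(horizontalFov / 2f));
        return boxCenter + Vector3.Normalize(smallestDimensionBasis) * distance;
    }
}
```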
Edit:
So I think what you're getting at is that you have the camera at an arbitrary position looking somewhere, and you have an AABB at another position. Without moving the camera to face a side of the box, you want to fit it in the view.
If this is the case you'll have a bit more work; here's what I suggest:
Unproject two opposing corners of the screen space bounding box into world space. For a Z value, use the closest world-space points of your AABB to the camera.
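In case it helps, a sketch of such an unprojection with System.Numerics, assuming D3D-style row-vector math (world to clip is v * view * proj); the NDC z you pass in would correspond to the depth of the closest AABB point mentioned above:

```csharp
using System.Numerics;

static class Unprojector
{
    // Map a normalized-device-coordinate point back to world space by inverting
    // the combined view-projection matrix and dividing by w.
    public static Vector3 Unproject(Vector3 ndc, Matrix4x4 view, Matrix4x4 projection)
    {
        Matrix4x4.Invert(view * projection, out Matrix4x4 inverse);
        Vector4 world = Vector4.Transform(new Vector4(ndc, 1f), inverse);
        return new Vector3(world.X, world.Y, world.Z) / world.W;
    }
}
```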
I don't have it at hand at the moment but the book you want is http://www.amazon.com/Jim-Blinns-Corner-Graphics-Pipeline/dp/1558603875/ref=ntt_at_ep_dpi_1
He has a whole chapter on this.
This is copied straight from my engine; it creates 6 planes which represent each of the six sides of the frustum.
I hope it comes in useful.
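For reference, one common way to build those six planes is to extract them from a combined view-projection matrix (the Gribb/Hartmann approach); here is a sketch with System.Numerics, assuming D3D row-vector conventions, not the answerer's engine code:

```csharp
using System.Numerics;

public static class FrustumPlanes
{
    // Six planes (left, right, bottom, top, near, far) from viewProj = view * proj,
    // assuming row vectors (v' = v * M) and D3D clip space 0 <= z' <= w'.
    // Points inside the frustum give a non-negative signed distance to every plane.
    public static Plane[] FromMatrix(Matrix4x4 m)
    {
        var planes = new[]
        {
            new Plane(m.M14 + m.M11, m.M24 + m.M21, m.M34 + m.M31, m.M44 + m.M41), // left
            new Plane(m.M14 - m.M11, m.M24 - m.M21, m.M34 - m.M31, m.M44 - m.M41), // right
            new Plane(m.M14 + m.M12, m.M24 + m.M22, m.M34 + m.M32, m.M44 + m.M42), // bottom
            new Plane(m.M14 - m.M12, m.M24 - m.M22, m.M34 - m.M32, m.M44 - m.M42), // top
            new Plane(m.M13, m.M23, m.M33, m.M43),                                 // near
            new Plane(m.M14 - m.M13, m.M24 - m.M23, m.M34 - m.M33, m.M44 - m.M43), // far
        };
        for (int i = 0; i < planes.Length; i++)
            planes[i] = Plane.Normalize(planes[i]);
        return planes;
    }
}
```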
If anyone else is interested in a more precise solution, I did this one for 3ds Max cameras, to fit any number of objects in the camera view. You can read the Maxscript code as pseudo-code; it's easy to follow and there are some helpful comments.

https://github.com/piXelicidio/pxMaxScript/tree/master/CameraZoomExtents

What I did for simplification is to work in camera space: take the object vertices (or bounding box vertices) and project them onto two 2D planes.

The first is like seeing your camera from the top view (the horizontal FOV).

The second is from the side view (the vertical FOV).

Project all the vertices onto the first plane (the top view).

Now take two lines coming from the camera position, representing the camera FOV, one for the left side and the other for the right side. We only need the direction of these lines.

Now we need to find a point (vertex) such that, if we draw the right-side line through it, all other points fall on the left side. (That's the red dot in the figure.)

Then find another point such that, if the left-side line goes through it, all other points fall on the right side of the line. (The blue dot.)

With those two points, we then intersect the two lines passing through them (we are still in 2D).

The resulting intersection is the best position of the camera to fit the scene, taking only the horizontal FOV into account.

Next, do the same for the vertical FOV.

These two positions give you all you need to decide whether the fit needs to be constrained by the sides or by the top and bottom. The one that translates the camera further away from the scene gives the "perfect fit"; the other will leave some empty room, and then you need to find the center... which is also calculated in the script at the link above!

Sorry, I can't keep explaining, I need to sleep now ;) If someone is interested, ask and I'll try to extend the answer.
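If it helps, here is a minimal sketch of that 2D step for one FOV axis, assuming camera space with the camera looking along +Z and points already projected to (lateral x, depth z); the function name and layout are mine, not taken from the Maxscript:

```csharp
using System;
using System.Collections.Generic;

static class ZoomExtents2D
{
    // Find the camera (x, z) where the two FOV boundary lines touch the extreme
    // points: equivalent to intersecting the "right line through the red dot"
    // with the "left line through the blue dot" described above.
    public static (float camX, float camZ) Fit(IReadOnlyList<(float x, float z)> points,
                                               float halfFovRadians)
    {
        float t = MathF.Tan(halfFovRadians);
        float right = float.MinValue; // max over points of (x - z * t)
        float left = float.MaxValue;  // min over points of (x + z * t)
        foreach (var (x, z) in points)
        {
            right = MathF.Max(right, x - z * t);
            left = MathF.Min(left, x + z * t);
        }
        // Intersection of the two boundary lines: tightest camera placement for this axis.
        return ((right + left) * 0.5f, (left - right) / (2f * t));
    }
}

// Run Fit once with the horizontal half-FOV (top view) and once with the vertical
// half-FOV (side view); keep the result that pushes the camera further back.
```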
Check this link
https://msdn.microsoft.com/en-us/library/bb197900.aspx
float distance = sphere.radius / sin(fov / 2);
float3 eyePoint = sphere.centerPoint - distance * camera.frontVector;