Why won't the y-axis work with the SuperBible frame of reference or gluLookAt?
I'm currently trying to understand how to use the GLFrame class from the SuperBible, and according to the 4th edition of the book, the camera matrix derived from the frame-of-reference class should work the same as gluLookAt.
When I add these lines
cameraFrame.SetForwardVector(-0.5f, 0.0f,-0.5f);
cameraFrame.Normalize();
The camera looks in the correct direction, yawed at 45 degrees (am I doing that right?).
However when I add this
cameraFrame.SetForwardVector(0.0f, 0.5f,-0.5f);
The camera just looks as if it was set to (0.0f, 0.0f, 1.0f)
Why is this? It's been driving me mad for three days. Maybe I'm not passing in the vectors correctly, but I'm not sure how to pass an x,y direction through a full 360 degrees for the look-at (forward) vector. Do the vectors have to be normalized before passing them in?
Eventually I hope to do full mouse look (FPS style), but for now just understanding why I can't make the camera simply pitch up would be a good start.
Thanks!
Here is the code in situ.
// Called to draw scene
void RenderScene(void)
{
    // Color values
    static GLfloat vFloorColor[] = { 0.0f, 1.0f, 0.0f, 1.0f };
    static GLfloat vTorusColor[] = { 1.0f, 0.0f, 0.0f, 1.0f };
    static GLfloat vSphereColor[] = { 0.0f, 0.0f, 1.0f, 1.0f };

    // Time based animation
    static CStopWatch rotTimer;
    float yRot = rotTimer.GetElapsedSeconds() * 60.0f;

    // Clear the color and depth buffers
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Save the current modelview matrix (the identity matrix)
    modelViewMatrix.PushMatrix();

    M3DMatrix44f mCamera;
    /////////
    ///////// My Code
    cameraFrame.SetForwardVector(-0.5f, -0.5f, -0.5f);
    cameraFrame.Normalize();
    ///////// End of my code
    cameraFrame.GetCameraMatrix(mCamera);
    modelViewMatrix.PushMatrix(mCamera);

    // Transform the light position into eye coordinates
    M3DVector4f vLightPos = { 0.0f, 10.0f, 5.0f, 1.0f };
    M3DVector4f vLightEyePos;
    m3dTransformVector4(vLightEyePos, vLightPos, mCamera);

    // Draw the ground
    shaderManager.UseStockShader(GLT_SHADER_FLAT,
                                 transformPipeline.GetModelViewProjectionMatrix(),
                                 vFloorColor);
    floorBatch.Draw();

    for(int i = 0; i < NUM_SPHERES; i++) {
        modelViewMatrix.PushMatrix();
        modelViewMatrix.MultMatrix(spheres[i]);
        shaderManager.UseStockShader(GLT_SHADER_POINT_LIGHT_DIFF, transformPipeline.GetModelViewMatrix(),
                                     transformPipeline.GetProjectionMatrix(), vLightEyePos, vSphereColor);
        sphereBatch.Draw();
        modelViewMatrix.PopMatrix();
    }

    // Draw the spinning torus
    modelViewMatrix.Translate(0.0f, 0.0f, -2.5f);

    // Save the translation
    modelViewMatrix.PushMatrix();

    // Apply a rotation and draw the torus
    modelViewMatrix.Rotate(yRot, 0.0f, 1.0f, 0.0f);
    shaderManager.UseStockShader(GLT_SHADER_POINT_LIGHT_DIFF, transformPipeline.GetModelViewMatrix(),
                                 transformPipeline.GetProjectionMatrix(), vLightEyePos, vTorusColor);
    torusBatch.Draw();
    modelViewMatrix.PopMatrix(); // "Erase" the rotation from before

    // Apply another rotation, followed by a translation, then draw the sphere
    modelViewMatrix.Rotate(yRot * -2.0f, 0.0f, 1.0f, 0.0f);
    modelViewMatrix.Translate(0.8f, 0.0f, 0.0f);
    shaderManager.UseStockShader(GLT_SHADER_POINT_LIGHT_DIFF, transformPipeline.GetModelViewMatrix(),
                                 transformPipeline.GetProjectionMatrix(), vLightEyePos, vSphereColor);
    sphereBatch.Draw();

    // Restore the previous modelview matrix (the identity matrix)
    modelViewMatrix.PopMatrix();
    modelViewMatrix.PopMatrix();

    // Do the buffer swap
    glutSwapBuffers();

    // Tell GLUT to do it again
    glutPostRedisplay();
}
1 Answer
Thanks for everyone's answers, but the problem I had was this: in the OpenGL SuperBible I was using their built-in frame-of-reference class, and the issue came down to two functions, one called rotate and one called rotateWorld.
I needed to use rotate for up/down movement and rotateWorld for left/right movement. This made the camera behave correctly (fly camera).
It makes sense: regardless of where you are looking up/down, you want the whole world to always spin around the vertical axis. Phew!