How do I convert iPhone camera coordinates to view coordinates?

Posted 2024-10-06 11:43:06


I am making an application that uses OpenCV to parse output from the iPhone camera and display the results on the screen. I know the iPhone camera reads in landscape orientation, so the coordinates need to be converted before the results are displayed. I figured I would just need to scale, rotate -90 degrees, and then translate (tx = width of the image), but that is not working. I found some sample code online (below), but it does not work either for any orientation other than landscape-left.

What is the proper way to convert from camera view to user view?

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 
{
    //lock the pixel buffer
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
    CVPixelBufferLockBaseAddress( pixelBuffer, 0 ); 

    //buffer dimensions (CVPixelBufferGetWidth/Height return size_t)
    size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
    size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);

    //a bunch of OpenCV stuff that finds features in an image and
    // returns a cameraRect

    //translate from the landscape buffer to portrait view coordinates
    // (320x460 is the portrait screen area in points)
    float scaleX = 320 / (float)bufferHeight;
    float scaleY = 460 / (float)bufferWidth;
    deviceRect.origin.x    = 320 - ((cameraRect.y + cameraRect.height) * scaleX);
    deviceRect.origin.y    = cameraRect.x * scaleY;
    deviceRect.size.width  = cameraRect.height * scaleX;
    deviceRect.size.height = cameraRect.width * scaleY;

    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
}

//elsewhere
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

Edit: By "not working", I mean that the rectangle displayed on the screen does not correspond to the matched object visible in the preview layer. I'm detecting faces, and depending on the orientation of the phone, the deviceRect drawn on the screen is off either a little or a lot from the face in the preview layer. In landscape-left, deviceRect matches the visible face perfectly.
