CVPixelBufferRef: video buffer and depth buffer have different orientations

Posted on 2025-01-12 22:36:29

Right now I'm working with the depth camera on iOS, since I want to measure the distance from certain points in the frame to the camera.

I did all the necessary setup in my camera solution, and now I have two CVPixelBufferRefs in hand - one with pixel data and one with depth data.

This is how I fetch both buffers from AVCaptureDataOutputSynchronizer:

- (void)dataOutputSynchronizer:(AVCaptureDataOutputSynchronizer *)synchronizer didOutputSynchronizedDataCollection:(AVCaptureSynchronizedDataCollection *)synchronizedDataCollection
{
    // Pull the synchronized depth and video data for the two outputs.
    AVCaptureSynchronizedDepthData *syncedDepthData = (AVCaptureSynchronizedDepthData *)[synchronizedDataCollection synchronizedDataForCaptureOutput:depthDataOutput];
    AVCaptureSynchronizedSampleBufferData *syncedVideoData = (AVCaptureSynchronizedSampleBufferData *)[synchronizedDataCollection synchronizedDataForCaptureOutput:dataOutput];
    
    // Skip the frame if either side of the pair was dropped.
    if (syncedDepthData.depthDataWasDropped || syncedVideoData.sampleBufferWasDropped) {
        return;
    }
    
    AVDepthData *depthData = syncedDepthData.depthData;
    CVPixelBufferRef depthPixelBuffer = depthData.depthDataMap;
    
    CMSampleBufferRef sampleBuffer = syncedVideoData.sampleBuffer;
    
    if (!CMSampleBufferDataIsReady(sampleBuffer)) {
        return;
    }
    //... code continues
}

Before getting any depth data, I decided to check whether the dimensions of my buffers align. I found out that my buffer with pixel data has dimensions 480x640 (portrait, like the orientation of my app) and the buffer with depth data has dimensions 640x480 (landscape).
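
For reference, here is a minimal Swift sketch of the check I mean (videoPixelBuffer and depthPixelBuffer are placeholder names; the video buffer comes from CMSampleBufferGetImageBuffer on the sample buffer above):

import CoreVideo

// Minimal sketch: compare the dimensions of the video buffer and the
// depth buffer. videoPixelBuffer and depthPixelBuffer are placeholders
// for the two buffers obtained in the synchronizer callback.
func dimensionsMatch(video videoPixelBuffer: CVPixelBuffer, depth depthPixelBuffer: CVPixelBuffer) -> Bool {
    let videoWidth = CVPixelBufferGetWidth(videoPixelBuffer)    // 480 in my case
    let videoHeight = CVPixelBufferGetHeight(videoPixelBuffer)  // 640 in my case
    let depthWidth = CVPixelBufferGetWidth(depthPixelBuffer)    // 640 in my case
    let depthHeight = CVPixelBufferGetHeight(depthPixelBuffer)  // 480 in my case
    return videoWidth == depthWidth && videoHeight == depthHeight
}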

Obviously, the buffers are different and I cannot match pixels to depth values. Do I need to rotate my depth buffer somehow? Is this a known issue?

Please advise how I should solve this problem. Thanks in advance!

Comments (1)

兮颜 2025-01-19 22:36:29

Yes, I see it too; hope this helps. The angle is in radians.

import CoreImage
import CoreVideo

extension CVPixelBuffer {
    // Rotates the buffer by the given angle (in radians) about the origin,
    // then scales it by targetSize.width / pre-rotation width using
    // bicubic resampling.
    func transformedImage(targetSize: CGSize, rotationAngle: CGFloat) -> CIImage? {
        let image = CIImage(cvPixelBuffer: self, options: [:])
        let scaleFactor = Float(targetSize.width) / Float(image.extent.width)
        return image.transformed(by: CGAffineTransform(rotationAngle: rotationAngle)).applyingFilter("CIBicubicScaleTransform", parameters: ["inputScale": scaleFactor])
    }
}
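
One caveat: CGAffineTransform(rotationAngle:) rotates around the origin, so after a -.pi/2 rotation the returned CIImage's extent no longer starts at (0, 0). Depending on how you consume the result, you may need to translate the image back to the origin, or render it using image.extent as the bounds, as in the sketch further below.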

Caller side: I used the angle -(.pi/2):

let depthMap = pixelBuffer.transformedImage(targetSize: desiredSize, rotationAngle: -(.pi/2))
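
If you also need to read individual depth values out of the rotated image (rather than just display it), one option is to render the CIImage back into a pixel buffer first. A minimal sketch, assuming the depth data is kCVPixelFormatType_DepthFloat32; the function name and parameters are just for illustration:

import CoreImage
import CoreVideo

// Sketch: render a rotated/scaled depth CIImage back into a
// DepthFloat32 pixel buffer and read one value out of it.
func depthValue(from image: CIImage, atX x: Int, y: Int, context: CIContext) -> Float32? {
    let width = Int(image.extent.width)
    let height = Int(image.extent.height)
    guard x >= 0, y >= 0, x < width, y < height else { return nil }

    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_DepthFloat32, nil, &buffer)
    guard status == kCVReturnSuccess, let output = buffer else { return nil }

    // Rendering with image.extent as the bounds compensates for the
    // extent's origin having been moved by the rotation.
    context.render(image, to: output, bounds: image.extent, colorSpace: nil)

    CVPixelBufferLockBaseAddress(output, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(output, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(output) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(output)
    let row = base.advanced(by: y * bytesPerRow)
    return row.assumingMemoryBound(to: Float32.self)[x]
}

Creating the CIContext once and reusing it across frames is cheaper than building a new one per call.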