Why CVPixelBufferLockBaseAddress? Capturing still images with AVFoundation

Posted on 2024-11-17 06:57:32



I'm writing an iPhone app that creates still images from the camera using AVFoundation.
Reading the programming guide I found some code that does almost what I need, so I'm trying to "reverse engineer" it and understand it.
I'm having some difficulty understanding the part that converts a CMSampleBuffer into an image.
Here is what I understood, followed by the code.
The CMSampleBuffer represents a buffer in memory where the image is stored along with additional data. Later I call the function CMSampleBufferGetImageBuffer() to receive a CVImageBuffer back with just the image data.
Now there is a function that I don't understand and whose purpose I can only guess at: CVPixelBufferLockBaseAddress(imageBuffer, 0); I can't tell whether it is a "thread lock" to avoid multiple operations on the buffer, or a lock on the buffer's address to prevent changes during the operation (and why should it change? Another frame? Isn't the data copied to another location?). The rest of the code is clear to me.
I tried searching on Google but still didn't find anything helpful.
Can someone shed some light?

-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) sampleBuffer{

// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

// Lock the base address of the pixel buffer 
CVPixelBufferLockBaseAddress(imageBuffer, 0); 

void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer); 
size_t height = CVPixelBufferGetHeight(imageBuffer); 

// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);

// Free up the context and color space
CGContextRelease(context); 
CGColorSpaceRelease(colorSpace);

// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];

// Release the Quartz image
CGImageRelease(quartzImage);

return (image);
}

Thanks,
Andrea

2 Answers

梦巷 2024-11-24 06:57:32


The header file says that CVPixelBufferLockBaseAddress makes the memory "accessible". I'm not sure what that means exactly, but if you don't call it, CVPixelBufferGetBaseAddress fails, so you'd better do it.

EDIT

The short answer is: just do it. As for why, consider that the image may not live in main memory; it may live in a texture on some GPU somewhere (Core Video works on the Mac too), or even be in a different format from the one you expect, so the pixels you get are actually a copy. Without Lock/Unlock or some kind of Begin/End pair, the implementation has no way to know when you've finished with the duplicate pixels, so they would effectively be leaked. CVPixelBufferLockBaseAddress simply gives Core Video scope information; I wouldn't get too hung up on it.

Yes, they could have simply returned the pixels from CVPixelBufferGetBaseAddress and eliminated CVPixelBufferLockBaseAddress altogether. I don't know why they didn't do that.
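
To illustrate, here is a minimal sketch of the Lock/Unlock "scope" pattern described above, assuming a buffer you only read from (kCVPixelBufferLock_ReadOnly hints to Core Video that nothing needs to be copied back when you unlock):

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

// Minimal sketch: treat Lock/Unlock as a Begin/End pair around any CPU
// access to the pixels. The flags passed to Unlock must match Lock.
static void readPixels(CVPixelBufferRef pixelBuffer) {
    // Ask Core Video to make the pixels reachable from the CPU; this may
    // involve copying them from wherever they really live (e.g. a GPU texture).
    if (CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess) {
        return;
    }

    // The base address is only valid between Lock and Unlock.
    uint8_t *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    if (height > 0 && bytesPerRow >= 4) {
        // Peek at the first pixel (BGRA order for a 32BGRA camera buffer).
        NSLog(@"first pixel: %02x %02x %02x %02x",
              baseAddress[0], baseAddress[1], baseAddress[2], baseAddress[3]);
    }

    // End of the "scope": do not touch baseAddress past this point.
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}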

高跟鞋的旋律 2024-11-24 06:57:32


I'd like to give a few more hints about this function; I've run some tests so far, and here is what I can tell you.
When you get the base address you are probably getting the address of some shared memory resource. This becomes clear if you print the base address: doing that, you can see that base addresses repeat as you grab video frames.
In my app I take frames at specific intervals and pass the CVImageBufferRef to an NSOperation subclass that converts the buffer into an image and saves it on the phone. I do not lock the pixel buffer until the operation starts converting the CVImageBufferRef; even when pushing at higher frame rates, the base address of the pixels and the CVImageBufferRef buffer address are equal before the creation of the NSOperation and inside it. I just retain the CVImageBufferRef. I was expecting to see mismatched references, and even though I didn't, I guess the best description is that CVPixelBufferLockBaseAddress locks the portion of memory where the buffer is located, making it inaccessible to other resources, so it will keep the same data until you unlock it.
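
For what it's worth, a rough sketch of the kind of test described above might look like this (the capture session wiring is assumed, the class name is made up, and the hand-off to the NSOperation is only hinted at in comments):

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

// Hypothetical delegate that logs the buffer and base addresses of
// incoming frames to watch Core Video recycle buffers from its pool.
@interface FrameLogger : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@end

@implementation FrameLogger

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    // The same CVImageBufferRef and base address show up again and again
    // as frames arrive, because the pool reuses a small set of buffers.
    NSLog(@"imageBuffer=%p baseAddress=%p",
          imageBuffer, CVPixelBufferGetBaseAddress(imageBuffer));
    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    // To keep the frame beyond this callback (e.g. for an NSOperation),
    // retain it and release it when the operation has finished with it.
    CVBufferRetain(imageBuffer);
    // ... enqueue the operation that converts and saves the image ...
    CVBufferRelease(imageBuffer); // placeholder: the operation would do this
}

@end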
