The correct way to draw on / edit a CVPixelBuffer in Swift on iOS
Is there a standard, performant way to edit/draw on a CVImageBuffer/CVPixelBuffer in Swift?
All the video editing demos I've found online overlay the drawing (rectangles or text) on the screen and don't directly edit the CVPixelBuffer.
UPDATE: I tried using a CGContext, but the saved video doesn't show the context drawing:
private var adapter: AVAssetWriterInputPixelBufferAdaptor?

extension TrainViewController: CameraFeedManagerDelegate {
    func didOutput(sampleBuffer: CMSampleBuffer) {
        let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // width, height, colorSpace and alphaInfo are defined elsewhere in the controller.
        // Create a bitmap context that wraps the pixel buffer's memory and draw into it.
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      space: colorSpace,
                                      bitmapInfo: alphaInfo.rawValue)
        else {
            return
        }

        context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1.0)
        context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))
        context.flush()

        adapter?.append(pixelBuffer, withPresentationTime: time)
    }
}
1 Answer
You need to call CVPixelBufferLockBaseAddress(pixelBuffer, 0) before creating the bitmap CGContext, and CVPixelBufferUnlockBaseAddress(pixelBuffer, 0) after you have finished drawing to the context. Without locking the pixel buffer, CVPixelBufferGetBaseAddress() returns NULL. This causes your CGContext to allocate new memory to draw into, which is subsequently discarded.
Also double check your colour space. It's easy to mix up your components.
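For example, here is a minimal sketch of the corrected callback with the lock/unlock pair added. It assumes the capture session is configured for kCVPixelFormatType_32BGRA output and reuses the adapter, timestamp and _time properties from the question; those assumptions may differ from your setup.

func didOutput(sampleBuffer: CMSampleBuffer) {
    let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // Lock the buffer so CVPixelBufferGetBaseAddress() returns the real backing memory.
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    // For a 32BGRA buffer (assumed): 32-bit little-endian pixels with premultiplied
    // alpha in the most significant byte, i.e. B, G, R, A byte order in memory.
    let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue |
                     CGImageAlphaInfo.premultipliedFirst.rawValue

    // The context now draws straight into the pixel buffer's locked memory.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo)
    else { return }

    context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1.0)
    context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))
    context.flush()

    adapter?.append(pixelBuffer, withPresentationTime: time)
}

Note that Core Graphics can only wrap an RGB-style buffer such as 32BGRA. If the camera is delivering a planar YUV format (for example kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), set the video output's pixel format to 32BGRA or convert the frame before drawing.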