Export 16-bit grayscale depth image (1440x1920) - Swift iOS 15.3
I'd like to export a 16-bit grayscale PNG of my captured sceneDepth data on iOS (for machine learning purposes).
I am using the new iPad Pro with LiDAR sensor to capture the data in an ARSession.
I already get the 192x256 depth map, which I scaled up by 7.5 to match the 1440x1920 resolution of my RGB images. This is the code I have so far:
func convertDepthToImg(frame: ARFrame) {
    let depthBuffer = frame.sceneDepth?.depthMap
    var ciImageDepth: CIImage = CIImage(cvPixelBuffer: depthBuffer!)
    // Transform on the pixel level to match the RGB size; apply nearest-neighbour
    // or linear sampling (depends on performance in the network)
    let transformation = CGAffineTransform(scaleX: 7.5, y: 7.5)
    ciImageDepth = ciImageDepth.samplingLinear()
        .transformed(by: transformation)
    let contextDepth: CIContext = CIContext(options: nil)
    let cgImageDepth: CGImage = contextDepth.createCGImage(ciImageDepth, from: ciImageDepth.extent)!
    // convert to the required 16-bit gray PNG
    convertTo16BitGrayPng(image: cgImageDepth)
}
// Function to create a vImage_Buffer for more functionality on images
func createVImg(image: CGImage) -> vImage_Buffer? {
    guard let vImageBuffer = try? vImage_Buffer(cgImage: image) else {
        return nil
    }
    return vImageBuffer
}
func convertTo16BitGrayPng(image: CGImage) {
    let width = 1440
    let height = 1920
    // create a vImage_Buffer for the source image
    var srcBuf = createVImg(image: image)
    print("Height: ", String(srcBuf!.height))
    print("Width: ", String(srcBuf!.width))
    // allocate memory for the final size (2 bytes per pixel for 16-bit)
    let bv = malloc(width * height * 2)!
    var db = vImage_Buffer(data: bv,
                           height: vImagePixelCount(height),
                           width: vImagePixelCount(width),
                           rowBytes: width * 2)
    // convert the planar 32-bit float source to planar 16-bit float
    vImageConvert_PlanarFtoPlanar16F(&srcBuf!, &db, vImage_Flags(kvImageNoFlags))
    let bp = bv.assumingMemoryBound(to: UInt16.self)
    let prov = CGDataProvider(data: CFDataCreateWithBytesNoCopy(kCFAllocatorDefault,
                                                                bp,
                                                                height * width * 2,
                                                                kCFAllocatorDefault))!
    let cgImage = CGImage(width: width,
                          height: height,
                          bitsPerComponent: 5,
                          bitsPerPixel: 16,
                          bytesPerRow: 2 * width,
                          space: CGColorSpace(name: CGColorSpace.linearSRGB)!,
                          bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue),
                          provider: prov,
                          decode: nil,
                          shouldInterpolate: false,
                          intent: .defaultIntent)
    // save the processed image to the documents directory
    saveDptToDocs(cgImage: cgImage!, type: "dpt")
}
... save image to the documents path (works fine)
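For reference, the write itself is plain ImageIO; a minimal sketch of such a save helper (savePNG is a hypothetical stand-in for the saveDptToDocs used above, minus the documents-directory path handling):

import ImageIO
import UniformTypeIdentifiers

// Hypothetical stand-in for saveDptToDocs: writes a CGImage as PNG,
// preserving whatever bit depth the CGImage carries.
func savePNG(cgImage: CGImage, to url: URL) {
    guard let dest = CGImageDestinationCreateWithURL(url as CFURL,
                                                     UTType.png.identifier as CFString,
                                                     1, nil) else { return }
    CGImageDestinationAddImage(dest, cgImage, nil)
    CGImageDestinationFinalize(dest)
}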
I used this question's answer to convert my images to 16-bit and save them to my documents directory, but I only get 24-bit images. I really can't get the 16-bit export to work.
I have already exported images in 32, 64, 8, and even 24 bit. However, 16 bit seems to be somewhat tricky.
Please help.
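(The actual bit depth of an exported file can be verified by reading its properties back with ImageIO; a minimal sketch:)

import ImageIO

func printBitDepth(of url: URL) {
    guard let src = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(src, 0, nil) as? [CFString: Any] else { return }
    // kCGImagePropertyDepth reports bits per component
    // (8 for a 24-bit RGB PNG, 16 for a 16-bit grayscale PNG)
    print("Depth:", props[kCGImagePropertyDepth] ?? "unknown")
    print("Color model:", props[kCGImagePropertyColorModel] ?? "unknown")
}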
1 Answer
If your output image is 16-bit grayscale, I think the color space you initialise the CGImage with should be a grayscale color space (e.g. CGColorSpaceCreateDeviceGray()), and bitsPerComponent and bitsPerPixel should both be 16. Also, the bitmapInfo should be along the lines of:
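(The code snippet that followed is missing from this copy. Since vImageConvert_PlanarFtoPlanar16F leaves 16-bit half-float pixels in the buffer, a plausible reconstruction, an assumption rather than the answerer's exact code, is:)

// assumes the buffer holds 16-bit half-float values, little-endian
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue |
                                        CGBitmapInfo.floatComponents.rawValue |
                                        CGBitmapInfo.byteOrder16Little.rawValue)

Applied to the CGImage construction in the question, that would give (a sketch under the same assumption):

let cgImage = CGImage(width: width,
                      height: height,
                      bitsPerComponent: 16,
                      bitsPerPixel: 16,
                      bytesPerRow: 2 * width,
                      space: CGColorSpaceCreateDeviceGray(),
                      bitmapInfo: bitmapInfo,
                      provider: prov,
                      decode: nil,
                      shouldInterpolate: false,
                      intent: .defaultIntent)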