OpenVINO API 2.0 cannot read a dynamic input batch with YOLOv4
I developed a Qt app that lets the user choose a DICOM file and then shows the inference result.
I use dcmread to read a DICOM file as many image slices; for example, a single DICOM file can be converted into 60 JPG images.
The following is how I load the DICOM file.
from pydicom import dcmread
import openvino.runtime as ov

dcm_file = "1037973"
ds = dcmread(dcm_file, force=True)
ds.PixelRepresentation = 0
ds_arr = ds.pixel_array

core = ov.Core()
model = core.read_model(model="frozen_darknet_yolov4_model.xml")
model.reshape([ds_arr.shape[0], ds_arr.shape[1], ds_arr.shape[2], 3])
compiled_model = core.compile_model(model, "CPU")
infer_request = compiled_model.create_infer_request()
input_tensor = ov.Tensor(array=ds_arr, shared_memory=True)
#infer_request.set_input_tensor(input_tensor)
infer_request.start_async()
infer_request.wait()
output = infer_request.get_output_tensor()
print(output)
I use model.reshape to make my YOLOv4 model fit the batch, height, and width of my input file.
But the error below suggests I cannot use a batch size greater than 1.
Traceback (most recent call last):
File "C:\Users\john0\Desktop\hf_inference_tool\controller.py", line 90, in show_inference_result
yolov4_inference_engine(gv.gInImgPath)
File "C:\Users\john0\Desktop\hf_inference_tool\inference.py", line 117, in yolov4_inference_engine
output = infer_request.get_output_tensor()
RuntimeError: get_output_tensor() must be called on a function with exactly one parameter.
How can I use dynamic input correctly in API 2.0?
My environment is Windows 11 with openvino_2022.1.0.643.
Comments (1)
The ov::InferRequest::get_output_tensor method without arguments can only be used on a model with a single output.
Since your model has three outputs, call ov::InferRequest::get_output_tensor with an index argument (index: int) instead.
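The fix can be sketched as a small helper (the name `collect_outputs` is hypothetical; the per-index `get_output_tensor(i)` call is the API 2.0 method referred to above):

```python
# Gather every output of a multi-output model by index.
# The no-argument get_output_tensor() raises a RuntimeError on models
# with more than one output, as with the YOLOv4 model above (three outputs).
def collect_outputs(infer_request, n_outputs):
    """Return a list of output tensors, one per model output."""
    return [infer_request.get_output_tensor(i) for i in range(n_outputs)]
```

With the question's code, after infer_request.wait(), this would be called as collect_outputs(infer_request, len(compiled_model.outputs)), and each returned tensor can then be post-processed separately.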