OpenVINO API 2.0 can't read dynamic input batches with YOLOv4

Posted 2025-01-31 20:55:52 · 4 views · 0 comments


I developed a Qt app that lets the user choose a DICOM file and then shows the inference result.


I use dcmread to read a DICOM file as many image slices.

For example, a single DICOM file can convert to 60 JPG images.

The following is how I input a DICOM file.

import openvino.runtime as ov
from pydicom import dcmread

dcm_file = "1037973"
ds = dcmread(dcm_file, force=True)
ds.PixelRepresentation = 0
ds_arr = ds.pixel_array  # shape: (slices, height, width)

core = ov.Core()
model = core.read_model(model="frozen_darknet_yolov4_model.xml")
# Reshape the model to match the batch (slice count), height, and width of the input
model.reshape([ds_arr.shape[0], ds_arr.shape[1], ds_arr.shape[2], 3])
compiled_model = core.compile_model(model, "CPU")
infer_request = compiled_model.create_infer_request()
input_tensor = ov.Tensor(array=ds_arr, shared_memory=True)
#infer_request.set_input_tensor(input_tensor)
infer_request.start_async()
infer_request.wait()
output = infer_request.get_output_tensor()
print(output)

I use model.reshape to make my YOLOv4 model fit the batch, height, and width of my input file.

But the error below suggests that I can't use a batch size greater than 1.

Traceback (most recent call last):
  File "C:\Users\john0\Desktop\hf_inference_tool\controller.py", line 90, in show_inference_result
    yolov4_inference_engine(gv.gInImgPath)
  File "C:\Users\john0\Desktop\hf_inference_tool\inference.py", line 117, in yolov4_inference_engine
    output = infer_request.get_output_tensor()
RuntimeError: get_output_tensor() must be called on a function with exactly one parameter.

How can I use dynamic input in API 2.0 correctly?

My environment is Windows 11 with openvino_2022.1.0.643 version.


Comments (1)

抱着落日 2025-02-07 20:55:52


The ov::InferRequest::get_output_tensor method without arguments can only be used for a model with a single output.

Since your model has three outputs, use the ov::InferRequest::get_output_tensor method with an index argument (index: int).

output_tensor1 = infer_request.get_output_tensor(0)
output_tensor2 = infer_request.get_output_tensor(1)
output_tensor3 = infer_request.get_output_tensor(2)
print(output_tensor1)
print(output_tensor2)
print(output_tensor3)