OpenVINO: how do I get the detection results?

Posted on 2025-01-12 01:54:12


I converted my YOLOv5 weights to ONNX and loaded the ONNX file in Netron.

These are the properties of my model:

(screenshot of the model's input and output properties in Netron)

The model has been converted with Model Optimizer so that it can be used with OpenVINO.

It is my understanding (please correct me if I am wrong) that "output" is the detection result and that 345, 403, and 461 are intermediate outputs of the network.
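
One way to sanity-check this would be to list every output and its shape (a minimal sketch, assuming the converted best.xml/best.bin IR files and the 2021.x Inference Engine Python API):

    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="best.xml")
    for name, data in net.outputs.items():
        # for a 640x640 YOLOv5 export, "output" should be (1, 25200, 5 + num_classes),
        # while 345/403/461 are presumably the three per-stride detection heads
        print(name, data.shape)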

I can't seem to work out how to get the detection results (the detected class, ...) and the bounding-box data.

This is my code:

    from openvino.inference_engine import IECore   # OpenVINO 2021.x Inference Engine API
    import cv2
    import numpy as np
    from utils.torch_utils import time_sync        # YOLOv5 timing helper

    ie = IECore()
    devices = ie.available_devices
    for device in devices:
        device_name = ie.get_metric(device_name=device, metric_name="FULL_DEVICE_NAME")
        print(f"{device}: {device_name}")

    classification_model_xml = "best.xml"
    t1 = time_sync()
    net = ie.read_network(model=classification_model_xml)
    exec_net = ie.load_network(network=net, device_name="CPU")
    input_layer = next(iter(net.input_info))
    output_layer = net.outputs['output']

    image_filename = "test.jpg"
    image = cv2.imread(image_filename)

    print(f"input layout: {net.input_info[input_layer].layout}")
    print(f"input precision: {net.input_info[input_layer].precision}")
    print(f"input shape: {net.input_info[input_layer].tensor_desc.dims}")
    print(f"output layout: {output_layer.layout}")
    print(f"output precision: {output_layer.precision}")
    print(f"output shape: {output_layer.shape}")

    N, C, H, W = net.input_info[input_layer].tensor_desc.dims
    print(N, C, H, W)

    # resize to the network input size and convert HWC (BGR) to NCHW float32
    image = cv2.resize(image, (W, H))
    input_data = np.expand_dims(np.transpose(image, (2, 0, 1)), 0).astype(np.float32)

    input_key = next(iter(exec_net.input_info))
    output_key = next(iter(exec_net.outputs.keys()))
    result = exec_net.infer(inputs={input_key: input_data})
    t2 = time_sync()

    # result is a dict keyed by output name; 'output' holds the raw YOLOv5 predictions
    output = result['output']
    result_index = np.argmax(output)
    print(result_index)
    self.LabelTimeMs.setText("{:.2f}".format((t2 - t1) * 1000))

How can I access the recognized class and the bounding-box data?


2 Answers

胡大本事 2025-01-19 01:54:12


Starting from the 2020.4 release, OpenVINO™ supports reading native ONNX models, so you don't have to convert the ONNX model into IR using Model Optimizer.
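
For example, reading the exported ONNX file directly looks roughly like this (a minimal sketch, assuming OpenVINO 2020.4+ with the 2021.x Inference Engine Python API and a file named best.onnx):

    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="best.onnx")      # the ONNX file is read directly, no IR conversion
    exec_net = ie.load_network(network=net, device_name="CPU")
    print("outputs:", {name: data.shape for name, data in net.outputs.items()})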

To check that your model actually works with OpenVINO, you can run it with the OpenVINO Benchmark Python Tool. You should not get any errors when running it with your model (warnings can be ignored as long as there are no errors).

It is recommended to use the OpenVINO Inference Engine samples, especially if you are a beginner. This Object Detection Python Demo might be suitable for your use case.

You can refer to the sample source code to see how the OpenVINO Inference Engine API is used, how the bounding boxes are created, how the model is handled, the propagation methods, and so on. From there, you can adapt the code to your goal.
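
For reference, decoding a raw YOLOv5 prediction tensor of shape (1, num_anchors, 5 + num_classes) usually looks something like the sketch below. This is not the demo's code: the thresholds and the use of cv2.dnn.NMSBoxes for non-maximum suppression are assumptions you would adapt, and the boxes come out in network-input pixels, so they still need to be rescaled to the original image size.

    import cv2
    import numpy as np

    def decode_yolov5(pred, conf_thres=0.25, iou_thres=0.45):
        """pred: raw YOLOv5 output, shape (1, num_anchors, 5 + num_classes);
        each row is (cx, cy, w, h, objectness, class scores...)."""
        pred = pred[0]
        pred = pred[pred[:, 4] > conf_thres]                  # keep boxes with enough objectness
        if len(pred) == 0:
            return np.empty((0, 4)), np.empty(0), np.empty(0, dtype=int)
        scores = pred[:, 5:] * pred[:, 4:5]                   # class confidence = class prob * objectness
        class_ids = scores.argmax(axis=1)
        confidences = scores.max(axis=1)
        boxes = pred[:, :4].copy()                            # (cx, cy, w, h) -> (x, y, w, h) top-left
        boxes[:, 0] -= boxes[:, 2] / 2
        boxes[:, 1] -= boxes[:, 3] / 2
        keep = cv2.dnn.NMSBoxes(boxes.tolist(), confidences.tolist(), conf_thres, iou_thres)
        keep = np.asarray(keep).reshape(-1).astype(int)
        return boxes[keep], confidences[keep], class_ids[keep]

    # e.g. boxes, scores, classes = decode_yolov5(result["output"])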

家住魔仙堡 2025-01-19 01:54:12


Not that long ago, OpenVINO inference sample code was added to the Ultralytics repo, which you might find useful (a rough usage sketch follows the links):

https://github.com/ultralytics/yolov5/pull/6057 "OpenVINO Export" (Dec 22, 2021)

https://github.com/ultralytics/yolov5/pull/6179 "Add OpenVINO inference" (Jan 4, 2022)

https://github.com/ultralytics/yolov5/pull/6739 "YOLOv5 v6.1 release" (Feb 22, 2022)
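
Very roughly, using that repo code looks like the sketch below. This is an outline, not the repo's exact code: DetectMultiBackend and non_max_suppression are the yolov5 repo's own helpers from around those PRs, and the accepted weights path and argument defaults may differ between releases.

    import torch
    from models.common import DetectMultiBackend       # from the yolov5 repo
    from utils.general import non_max_suppression

    model = DetectMultiBackend("best_openvino_model/best.xml", device=torch.device("cpu"))
    im = torch.zeros(1, 3, 640, 640)                    # preprocessed image: NCHW, float32, scaled to 0..1
    pred = model(im)                                    # raw predictions
    det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)[0]
    # each row of det: x1, y1, x2, y2, confidence, class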
