Argument list for class template "std::array" is missing
I tried to follow the ONNX C++ inference tutorial:
https://github.com/ilpropheta/onnxruntime-demo/blob/master/OnnxRuntimeDemo/Linear.cpp
I got 17 errors when building it as a C++ console app. I noticed the main error is related to array; error code E0441 says:
argument list for class template "std::array" is missing. Any help appreciated.
#include "Linear.h"
#include <onnxruntime_cxx_api.h>
#include <array>
#include <iostream>
using namespace std;
void Demo::RunLinearRegression()
{
// gives access to the underlying API (you can optionally customize log)
// you can create one environment per process (each environment manages an internal thread pool)
Ort::Env env;
// creates an inference session for a certain model
Ort::Session session{ env, LR"(linear.onnx)", Ort::SessionOptions{} };
// Ort::Session gives access to input and output information:
// - count
// - name
// - shape and type
std::cout << "Number of model inputs: " << session.GetInputCount() << "\n";
std::cout << "Number of model outputs: " << session.GetOutputCount() << "\n";
// you can customize how allocation works. Let's just use a default allocator provided by the library
Ort::AllocatorWithDefaultOptions allocator;
// get input and output names
auto* inputName = session.GetInputName(0, allocator);
std::cout << "Input name: " << inputName << "\n";
auto* outputName = session.GetOutputName(0, allocator);
std::cout << "Output name: " << outputName << "\n";
// get input shape
auto inputShape = session.GetInputTypeInfo(0).GetTensorTypeAndShapeInfo().GetShape();
// set some input values
std::vector<float> inputValues = { 4, 5, 6 };
// where to allocate the tensors
auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU);
// create the input tensor (this is not a deep copy!)
auto inputOnnxTensor = Ort::Value::CreateTensor<float>(memoryInfo,
inputValues.data(), inputValues.size(),
inputShape.data(), inputShape.size());
// the API needs the array of inputs you set and the array of outputs you get
array inputNames = { inputName };
array outputNames = { outputName };
// finally run the inference!
auto outputValues = session.Run(
Ort::RunOptions{ nullptr }, // e.g. set a verbosity level only for this run
inputNames.data(), &inputOnnxTensor, 1, // input to set
outputNames.data(), 1); // output to take
// extract first (and only) output
auto& output1 = outputValues[0];
const auto* floats = output1.GetTensorMutableData<float>();
const auto floatsCount = output1.GetTensorTypeAndShapeInfo().GetElementCount();
// just print the output values
std::copy_n(floats, floatsCount, ostream_iterator<float>(cout, " "));
// closing boilerplate
allocator.Free(inputName);
allocator.Free(outputName);
}
Comments (1)
Just change array to std::vector<const char*> for both inputNames and outputNames. A bare std::array does not work here because, unless class template argument deduction (C++17 or later) is enabled, std::array requires both the element type and the size in the declaration; see the std::array documentation.

So, to make std::array work, you need to declare it as follows (note the element type must be const char*, because Ort::Session::Run expects const char* const* for the name arrays):

std::array<const char*, 1> inputNames = { inputName };
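As a minimal sketch, assuming the same session, inputName, outputName, and inputOnnxTensor variables from the code in the question, the fixed section would look like this:

// the API needs the array of inputs you set and the array of outputs you get
std::vector<const char*> inputNames = { inputName };   // or: std::array<const char*, 1>
std::vector<const char*> outputNames = { outputName };

// Run takes raw const char* const* pointers, which both containers provide via data()
auto outputValues = session.Run(
    Ort::RunOptions{ nullptr },
    inputNames.data(), &inputOnnxTensor, 1,
    outputNames.data(), 1);

Either container works because Run only needs a contiguous block of const char* pointers; std::vector just avoids having to spell out the size.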