ONNXRuntime program crashes after running twice
I am writing a program that must run in VS2013; it receives two images, runs them through an ONNX model with C++, and returns the model's output.
Because I am working with images, I write the program in VS2019 and build a DLL that is then used from VS2013.
I was able to use it with one image and it worked fine. When I tried to use two images, running the program once works fine, but if I try to execute it twice in a row, it crashes with an assertion while trying to delete a std::wstring.
I have tried to trace it to the origin, but the closest I could get was Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.29.30133\crt\src\vcruntime, in the function
_CRT_SECURITYCRITICAL_ATTRIBUTE
void __CRTDECL operator delete(void* const block, size_t const) noexcept
{
operator delete(block);
}
The code in VS2019 is:
#include <array>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>
#include <Windows.h>
#include <opencv2/opencv.hpp>
#include <onnxruntime_cxx_api.h>

using namespace cv;
using namespace std;

using vf = std::vector<float>;   // flat pixel buffer fed to the input tensors
vf MatTo1DVector(const Mat& m);  // helper defined elsewhere in the project

Mat MeanOverChannels(Mat m) {
    /*
     * input: cv::Mat with more than one channel (=3)
     * output: cv::Mat with one channel, that is the average over channels
     */
    Size size = m.size();
    int channels = m.channels();
    Mat res(size, CV_64FC1);
    for (int i = 0; i < size.height; i++) {
        for (int j = 0; j < size.width; j++) {
            double avg = 0;
            auto cur = m.at<Vec3b>(i, j);
            for (int c = 0; c < channels; c++) {
                avg += cur[c];
            }
            avg /= channels;
            res.at<double>(i, j) = avg;
        }
    }
    return res;
}

cv::Mat GetInputNormalized(string imgpath,
                           int& original_height, int& original_width,
                           int input_height, int input_width) {
    /*
     * input: path to an image, and references to save the original size
     * output: image from the path after resizing and normalizing
     */
    // read input
    Mat img = imread(imgpath, IMREAD_COLOR); // can use IMREAD_UNCHANGED
    Size s = img.size();
    original_height = s.height;
    original_width = s.width;
    // mean over axis=2, you can comment out if it is not needed for you
    Mat img_mean = MeanOverChannels(img);
    // resize down (fx/fy are ignored when dsize is given, so the
    // interpolation flag goes in its own position)
    int down_width = input_width;
    int down_height = input_height;
    Mat resized_down;
    resize(img_mean, resized_down, Size(down_width, down_height), 0, 0, INTER_LINEAR);
    // can return resized_down; from here on you can customize the input
    Mat img2float;
    resized_down.convertTo(img2float, CV_64FC1);
    // normalize pixels: p -> (p - 127.5) / 127.5
    Mat imgNorm = (img2float - 127.5) / 127.5;
    return imgNorm;
}

bool Net::RunNet(std::wstring modelName, std::string inPath1, std::string inPath2, float& iRes) {
    /*********this model assumes two inputs and one output************/
    ///// path to the onnx model /////
    wchar_t buffer[MAX_PATH];
    GetModuleFileNameW(NULL, buffer, MAX_PATH);
    const wchar_t* seps = LR"(\/)"; // either path separator
    std::wstring exePath(buffer);
    std::wstring::size_type pos = exePath.find_last_of(seps);
    std::wstring modelPath = exePath.substr(0, pos + 1) + modelName; // keep the separator
    bool success = false;
    ///// variables to run the model /////
    Ort::Env env;
    Ort::Session session{ env, modelPath.c_str(), Ort::SessionOptions{} };
    Ort::AllocatorWithDefaultOptions allocator;
    auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU);
    auto* inputName1 = session.GetInputName(0, allocator);
    auto* inputName2 = session.GetInputName(1, allocator);
    auto* outputName = session.GetOutputName(0, allocator);
    std::array<const char*, 2> inputNames = { inputName1, inputName2 };
    std::array<const char*, 1> outputNames = { outputName };
    auto inputShape1 = session.GetInputTypeInfo(0).GetTensorTypeAndShapeInfo().GetShape();
    auto inputShape2 = session.GetInputTypeInfo(1).GetTensorTypeAndShapeInfo().GetShape();
    for (size_t i = 0; i < inputShape1.size(); ++i) {
        inputShape1[i] = inputShape1[i] > 0 ? inputShape1[i] : 1; // in case a dimension is -1 (None in Python)
    }
    for (size_t i = 0; i < inputShape2.size(); ++i) {
        inputShape2[i] = inputShape2[i] > 0 ? inputShape2[i] : 1; // in case a dimension is -1 (None in Python)
    }
    int input_height1 = inputShape1[1], input_width1 = inputShape1[2];
    int input_height2 = inputShape2[1], input_width2 = inputShape2[2];
    ///// getting input to the net /////
    Mat img1, img2;
    int origHeight1, origWidth1, origHeight2, origWidth2;
    img1 = GetInputNormalized(inPath1, origHeight1, origWidth1, input_height1, input_width1);
    img2 = GetInputNormalized(inPath2, origHeight2, origWidth2, input_height2, input_width2);
    vf inputValues1 = MatTo1DVector(img1);
    vf inputValues2 = MatTo1DVector(img2);
    //// create the input tensors (these do not deep-copy the buffers!)
    auto inputOnnxTensor1 = Ort::Value::CreateTensor<float>(memoryInfo,
        inputValues1.data(), inputValues1.size(),
        inputShape1.data(), inputShape1.size());
    auto inputOnnxTensor2 = Ort::Value::CreateTensor<float>(memoryInfo,
        inputValues2.data(), inputValues2.size(),
        inputShape2.data(), inputShape2.size());
    std::array<Ort::Value, 2> input_tensor = { std::move(inputOnnxTensor1), std::move(inputOnnxTensor2) };
    ///// executing the model /////
    auto outputValues = session.Run(
        Ort::RunOptions{ nullptr },  // e.g. set a verbosity level only for this run
        inputNames.data(), input_tensor.data(), input_tensor.size(), // inputs to set
        outputNames.data(), 1);      // outputs to take
    auto& output1 = outputValues[0];
    const auto* floats = output1.GetTensorMutableData<float>();
    const auto floatsCount = output1.GetTensorTypeAndShapeInfo().GetElementCount();
    float res = *floats;
    iRes = res;
    allocator.Free(inputName1);
    allocator.Free(inputName2);
    allocator.Free(outputName);
    success = true;
    return success;
}
What I export is a function of NetFactory that creates a unique_ptr<Net>. Through this factory we create an instance of Net, _net, and I execute _net->RunNetSimCompare(modelName, inPath1, inPath2, res); and return res after the execution.
Where can the problem be?
To those who encounter the same problem: it seems that changing the function's signature to the following fixes it: