How do I adapt my unit tests to cmake and ctest?
Until now, I've used an improvised unit testing procedure: basically a whole load of unit test programs run automatically by a batch file. Although a lot of these explicitly check their results, a lot more cheat: they dump their results out to text files which are versioned. Any change in the test results gets flagged by Subversion, and I can easily identify what the change was. Many of the tests output dot files or some other form that lets me get a visual representation of the output.
The trouble is that I'm switching to cmake. Going with the cmake flow means using out-of-source builds, which means that the convenience of dumping results into a shared source/build folder and versioning them along with the source doesn't really work.
As a replacement, what I'd like to do is tell the unit test tool where to find the files of expected results (in the source tree) and have it do the comparison. On failure, it should provide the actual results and diff listings.
Is this possible, or should I take a completely different approach?
Obviously, I could ignore ctest and just adapt what I've always done to out-of-source builds. I could version my folder-where-all-the-builds-live, for instance (with liberal use of 'ignore', of course). Is that sane? Probably not, as each build would end up with a separate copy of the expected results.
Also, any advice on the recommended way to do unit testing with cmake/ctest would be gratefully received. I've wasted a fair bit of time with cmake, not because it's bad, but because I didn't understand how best to work with it.
EDIT
In the end, I decided to keep the cmake/ctest side of the unit testing as simple as possible. To check actual results against expected results, I found a home for the following function in my library...
#include <ostream>
#include <sstream>
#include <string>

// Compare the actual output captured in p_Actual against the expected
// lines in the null-terminated array p_Expected. Reports pass/fail on
// p_Stream; on failure, prints both expected and actual results.
bool Check_Results (std::ostream &p_Stream ,
                    const char *p_Title ,
                    const char **p_Expected,
                    const std::ostringstream &p_Actual )
{
    // Assemble the expected lines into one newline-delimited string.
    std::ostringstream l_Expected_Stream;
    while (*p_Expected != 0)
    {
        l_Expected_Stream << (*p_Expected) << std::endl;
        p_Expected++;
    }
    std::string l_Expected (l_Expected_Stream.str ());
    std::string l_Actual (p_Actual.str ());
    bool l_Pass = (l_Actual == l_Expected);
    p_Stream << "Test: " << p_Title << " : ";
    if (l_Pass)
    {
        p_Stream << "Pass" << std::endl;
    }
    else
    {
        p_Stream << "*** FAIL ***" << std::endl;
        p_Stream << "===============================================================================" << std::endl;
        p_Stream << "Expected Results For: " << p_Title << std::endl;
        p_Stream << "-------------------------------------------------------------------------------" << std::endl;
        p_Stream << l_Expected;
        p_Stream << "===============================================================================" << std::endl;
        p_Stream << "Actual Results For: " << p_Title << std::endl;
        p_Stream << "-------------------------------------------------------------------------------" << std::endl;
        p_Stream << l_Actual;
        p_Stream << "===============================================================================" << std::endl;
    }
    return l_Pass;
}
A typical unit test now looks something like...
bool Test0001 ()
{
    std::ostringstream l_Actual;

    // Expected output, one entry per line, terminated by a null pointer.
    const char* l_Expected [] =
    {
        "Some",
        "Expected",
        "Results",
        0
    };

    // Deliberately mismatched output, so this example test fails.
    l_Actual << "Some" << std::endl
             << "Actual" << std::endl
             << "Results" << std::endl;

    return Check_Results (std::cout, "0001 - not a sane test", l_Expected, l_Actual);
}
Where I need a re-usable data-dumping function, it takes a parameter of type std::ostream& so that it can dump to the actual-results stream.
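For example (a hypothetical illustration; Point and Dump_Point are not from the original post), such a dumper might look like this:

#include <ostream>

struct Point
{
    int x;
    int y;
};

// Reusable dumper: writes to whatever stream the caller provides, so it
// can target std::cout during development or an actual-results stream
// (an std::ostringstream) inside a unit test.
void Dump_Point (std::ostream &p_Stream, const Point &p_Point)
{
    p_Stream << "(" << p_Point.x << ", " << p_Point.y << ")" << std::endl;
}

A test then simply passes its l_Actual stream: Dump_Point (l_Actual, l_Point);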
ANSWER
I'd use CMake's standalone scripting mode to run the tests and compare the outputs. Normally, for a unit test program, you would write add_test(testname testexecutable), but you may run any command as a test. If you write a script "runtest.cmake" and execute your unit test program via it, then the runtest.cmake script can do anything it likes, including using the cmake -E compare_files utility. You want something like the following in your CMakeLists.txt file:
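The snippet itself is missing from this page; a minimal sketch of what it describes, where testname is a placeholder test name and the TEST_PROG path assumes the test executable is built in the current binary directory:

add_test(testname
         ${CMAKE_COMMAND}
             -DTEST_PROG=${CMAKE_CURRENT_BINARY_DIR}/testexecutable
             -DSOURCEDIR=${CMAKE_CURRENT_SOURCE_DIR}
             -P ${CMAKE_CURRENT_SOURCE_DIR}/runtest.cmake)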
This runs a script (cmake -P runtest.cmake) and defines two variables: TEST_PROG, set to the path of the test executable, and SOURCEDIR, set to the current source directory. You need the first to know which program to run and the second to know where to find the expected test result files. The contents of runtest.cmake would be:
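The script body is also missing here; a plausible reconstruction, assuming the test program writes output.txt to the working directory and the known good copy lives at ${SOURCEDIR}/expected.txt:

# Run the unit test program itself; a non-zero exit code is a failure.
execute_process(COMMAND ${TEST_PROG}
                RESULT_VARIABLE test_result)
if(test_result)
    message(FATAL_ERROR "Test program failed: ${test_result}")
endif()

# Compare the output against the known good results in the source tree.
execute_process(COMMAND ${CMAKE_COMMAND} -E compare_files
                        output.txt ${SOURCEDIR}/expected.txt
                RESULT_VARIABLE compare_result)
if(compare_result)
    message(FATAL_ERROR "Output differs from expected results")
endif()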
The first execute_process runs the test program, which will write out "output.txt". If that works, then the next execute_process effectively runs cmake -E compare_files output.txt expected.txt. The file "expected.txt" is the known good result in your source tree. If there are differences, it errors out, so you can see the failed test.
What this doesn't do is print out the differences; CMake doesn't have a full "diff" implementation hidden away within it. At the moment you use Subversion to see what lines have changed, so an obvious solution is to change the last part to:
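That snippet is missing as well; a sketch of the comparison step it describes:

# Compare as before, but on failure copy the fresh output over the
# versioned expected file and let Subversion show what changed.
execute_process(COMMAND ${CMAKE_COMMAND} -E compare_files
                        output.txt ${SOURCEDIR}/expected.txt
                RESULT_VARIABLE compare_result)
if(compare_result)
    execute_process(COMMAND ${CMAKE_COMMAND} -E copy
                            output.txt ${SOURCEDIR}/expected.txt)
    execute_process(COMMAND svn diff ${SOURCEDIR}/expected.txt)
    message(FATAL_ERROR "Output differs from expected results")
endif()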
This overwrites the source tree with the build output on failure, then runs svn diff on it. The problem is that you shouldn't really go changing the source tree this way; when you run the test a second time, it passes! A better way is to install a visual diff tool and run that on your output and expected files.