Unit tests for comparing text files in NUnit
I have a class that processes two XML files and produces a text file.
I would like to write a bunch of unit/integration tests for this class that can individually pass or fail and that do the following:
- For inputs A and B, generate the output.
- Compare the contents of the generated file to the contents of the expected output.
- When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a text-file diff algorithm?
using System.IO;
using NUnit.Framework;

class ReportGenerator
{
public string Generate(string inputPathA, string inputPathB)
{
//process the two XML inputs and return the path of the generated text file
}
}
[TestFixture]
public class ReportGeneratorTests
{
static void Diff(string pathToExpectedResult, string pathToActualResult)
{
using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
{
using (StreamReader rs2 = File.OpenText(pathToActualResult))
{
string actualContents = rs2.ReadToEnd();
string expectedContents = rs1.ReadToEnd();
//this works, but the output could be a LOT more useful.
Assert.AreEqual(expectedContents, actualContents);
}
}
}
static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
ReportGenerator obj = new ReportGenerator();
string pathToResult = obj.Generate(pathToInputA, pathToInputB);
Diff(pathToExpectedResult, pathToResult);
}
[Test]
public void TestX()
{
TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
}
[Test]
public void TestY()
{
TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
}
//etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
5 Answers
As for the multiple tests with different data, use the NUnit RowTest extension:
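The original snippet for this answer isn't preserved here; below is a minimal sketch of what a row-based version of the fixture could look like. It uses NUnit's built-in [TestCase] attribute (NUnit 2.5 and later), which gives the same parameterized-test behavior as the RowTest extension's [RowTest]/[Row] attributes; the file names are the ones from the question.

using System.IO;
using NUnit.Framework;

[TestFixture]
public class ReportGeneratorRowTests
{
    // Each TestCase row runs as an individually passing or failing test.
    [TestCase("x1.xml", "x2.xml", "x-expected.txt")]
    [TestCase("y1.xml", "y2.xml", "y-expected.txt")]
    public void GeneratesExpectedReport(string inputA, string inputB, string expectedPath)
    {
        ReportGenerator generator = new ReportGenerator();
        string actualPath = generator.Generate(inputA, inputB);
        Assert.AreEqual(File.ReadAllText(expectedPath), File.ReadAllText(actualPath));
    }
}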
You are probably asking about testing against "gold" data. I don't know if there is a specific, widely accepted term for this kind of testing, but this is how we do it.
Create a base fixture class. It basically has "void DoTest(string fileName)", which reads the specified file into memory, executes an abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content differs, it throws an exception. The exception contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark the lines with "Expected:" and "Actual:", or you will forever be guessing which is which when looking at test results.
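The answer doesn't include actual code; a minimal sketch of such a base fixture, with the class name GoldTestFixture and the .gold naming assumed for illustration:

using System;
using System.IO;

public abstract class GoldTestFixture
{
    // Each concrete fixture supplies the transformation under test.
    protected abstract string Transform(string text);

    protected void DoTest(string fileName)
    {
        string input = File.ReadAllText(fileName);
        string[] actual = Transform(input).Replace("\r\n", "\n").Split('\n');
        string[] expected = File.ReadAllLines(fileName + ".gold");

        int lineCount = Math.Max(actual.Length, expected.Length);
        for (int i = 0; i < lineCount; i++)
        {
            string expectedLine = i < expected.Length ? expected[i] : "<missing line>";
            string actualLine = i < actual.Length ? actual[i] : "<missing line>";
            if (expectedLine != actualLine)
            {
                // Report the first difference with both lines clearly labelled.
                throw new Exception(string.Format(
                    "Difference at line {0}\nExpected: {1}\nActual:   {2}",
                    i + 1, expectedLine, actualLine));
            }
        }
    }
}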
Then you will have specific test fixtures, where you implement a Transform method that does the real work, and then have tests that look like this:
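For instance (a sketch; how Transform maps onto the question's two-file ReportGenerator is an assumption here):

using System.IO;
using NUnit.Framework;

[TestFixture]
public class ReportGeneratorGoldTests : GoldTestFixture
{
    // Hypothetical adapter: assume each test's input file lists the two XML
    // paths, one per line, and Transform returns the generated report text.
    protected override string Transform(string text)
    {
        string[] paths = text.Replace("\r\n", "\n").Split('\n');
        ReportGenerator generator = new ReportGenerator();
        return File.ReadAllText(generator.Generate(paths[0].Trim(), paths[1].Trim()));
    }

    [Test] public void SimpleReport() { DoTest("simple-report.txt"); }
    [Test] public void EmptyInputs() { DoTest("empty-inputs.txt"); }
}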
The name of the failed test instantly tells you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, such as ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet that creates a test for you in a second; you will spend much more time preparing data.
Then you will also need some test data and a way for your base fixture to find it, so be sure to set up project rules about it. If a test fails, dump the actual output to a file next to the gold file, and erase it if the test passes. That way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold".
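A sketch of those conventions, shown as a self-contained variant of the earlier base fixture (the .actual file name and the exact messages are assumptions):

using System;
using System.IO;

public abstract class GoldTestFixtureWithDump
{
    protected abstract string Transform(string text);

    protected void DoTest(string fileName)
    {
        string goldPath = fileName + ".gold";
        string dumpPath = fileName + ".actual";   // assumed naming convention
        string actualText = Transform(File.ReadAllText(fileName));

        if (!File.Exists(goldPath))
        {
            // No gold yet: fail, but keep the output so it can be reviewed and promoted to gold.
            File.WriteAllText(dumpPath, actualText);
            throw new Exception("No gold data for " + fileName + "; actual output written to " + dumpPath);
        }

        if (actualText != File.ReadAllText(goldPath))
        {
            File.WriteAllText(dumpPath, actualText);   // keep it around for an external diff tool
            throw new Exception("Output differs from " + goldPath + "; see " + dumpPath);
        }

        if (File.Exists(dumpPath))
        {
            File.Delete(dumpPath);                     // erase stale dumps once the test passes
        }
    }
}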
Rather than calling .AreEqual, you could parse the two input streams yourself, keep a count of line and column, and compare the contents. As soon as you find a difference, you can generate a message like...
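The original example message isn't preserved in this copy; a minimal sketch of the idea, with an assumed message format:

using System;
using System.IO;
using NUnit.Framework;

static class TextDiffAssert
{
    public static void AreEqual(string expectedPath, string actualPath)
    {
        string[] expected = File.ReadAllLines(expectedPath);
        string[] actual = File.ReadAllLines(actualPath);
        int lineCount = Math.Max(expected.Length, actual.Length);

        for (int line = 0; line < lineCount; line++)
        {
            string e = line < expected.Length ? expected[line] : "";
            string a = line < actual.Length ? actual[line] : "";
            if (e == a) continue;

            // Find the first column where the two lines diverge.
            int column = 0;
            while (column < e.Length && column < a.Length && e[column] == a[column]) column++;

            Assert.Fail(string.Format(
                "Files differ at line {0}, column {1}.\nExpected: {2}\nActual:   {3}",
                line + 1, column + 1, e, a));
        }
    }
}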
You could optionally enhance that by displaying multiple lines of output.
Note that, as a rule, I'd generally only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye or some other method that the data it contains is correct!
I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, then diff the XML files (without writing the result to disk) and compare the result to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.
Then you can write new tests just by adding new text files.
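A sketch of that loop, adapted to the question's ReportGenerator (which writes its output to disk and returns the path), using the numbered naming convention above:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class NumberedCasesTests
{
    [Test]
    public void AllNumberedCases()
    {
        ReportGenerator generator = new ReportGenerator();

        // Keep going until the next numbered case is missing.
        for (int i = 1; File.Exists("a" + i + ".xml"); i++)
        {
            string expected = File.ReadAllText("diff" + i + ".txt");
            string actual = File.ReadAllText(generator.Generate("a" + i + ".xml", "b" + i + ".xml"));
            Assert.AreEqual(expected, actual, "Case " + i + " differs from diff" + i + ".txt");
        }
    }
}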
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
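A rough sketch of that approach (it only compares node types and element names and builds an approximate path; attributes and text values would need the same treatment):

using System.Collections.Generic;
using System.Xml;
using NUnit.Framework;

static class XmlDiffAssert
{
    public static void AreEqual(string expectedPath, string actualPath)
    {
        var path = new List<string>();
        using (XmlReader expected = XmlReader.Create(expectedPath))
        using (XmlReader actual = XmlReader.Create(actualPath))
        {
            while (expected.Read())
            {
                if (!actual.Read())
                    Assert.Fail("Actual document ends early at /" + string.Join("/", path.ToArray()));

                if (expected.NodeType == XmlNodeType.Element)
                    path.Add(expected.Name);

                if (expected.NodeType != actual.NodeType || expected.Name != actual.Name)
                    Assert.Fail("Documents differ at /" + string.Join("/", path.ToArray()));

                // Leave the current element once it is closed (or if it is self-closing).
                if (expected.NodeType == XmlNodeType.EndElement ||
                    (expected.NodeType == XmlNodeType.Element && expected.IsEmptyElement))
                    path.RemoveAt(path.Count - 1);
            }
        }
    }
}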
PS: But in reality it was always enough for me to just read the whole file into a string and compare the two strings. For the reporting it is enough to see that the test failed. Then, when I debug, I usually diff the files using Araxis Merge to see exactly where the issues are.