How to test an interpreter using JUnit?
I am writing tests for an interpreter of some programming language in Java, using the JUnit framework. To this end I've created a large number of test cases, most of them containing code snippets in the language under test. Since these snippets are normally small, it is convenient to embed them in the Java code. However, Java doesn't support multiline string literals, which makes the code snippets a bit obscure due to escape sequences and the necessity to split longer string literals, for example:
String output = run("let a := 21;\n" +
"let b := 21;\n" +
"print a + b;");
assertEquals(output, "42");
Ideally I would like something like:
String output = run("""
let a := 21;
let b := 21;
print a + b;
""");
assertEquals(output, "42");
One possible solution is to move the code snippets to external files and refer to each file from the corresponding test case. However, this adds a significant maintenance burden.
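For reference, a bare-bones version of that approach might look like the sketch below; the resource layout and the loadSnippet helper are made up for illustration, and run stands in for the interpreter entry point:

import static org.junit.Assert.assertEquals;

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.junit.Test;

public class AdditionTest {

    @Test
    public void addsTwoNumbers() throws IOException {
        // The snippet lives in an external file, e.g. src/test/resources/snippets/addition.lang
        String output = run(loadSnippet("/snippets/addition.lang"));
        assertEquals("42", output);
    }

    // Made-up helper: reads a snippet stored as a classpath resource.
    private static String loadSnippet(String resource) throws IOException {
        try (InputStream in = AdditionTest.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("Snippet not found: " + resource);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    // Stand-in for the interpreter entry point; not part of this sketch.
    private static String run(String source) {
        throw new UnsupportedOperationException("interpreter not shown");
    }
}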
Another solution is to write the tests in a different JVM language that supports multiline string literals, such as Scala or Jython. This would add a new dependency to the project and would require porting the existing tests.
Is there any other way to keep the clarity of the test code snippets without adding too much maintenance overhead?
3 Answers
Moving the test cases to a file worked for me in the past; it was an interpreter as well. We used an XML file containing a list of test elements, each consisting of a testID, a value, an expected result, a type, and a description, and we used the testID and description to log failing tests.

It mainly worked because we had one generic, well-defined interface to the interpreter, like your run method, so refactoring was still possible. In our case this did not increase maintenance effort; in fact, we could easily create new tests by just adding more elements to the XML file. Maybe this is not the optimal way in which unit tests should be used, but it worked well for us.
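As a rough sketch of that setup (the file name tests.xml and the element/attribute names are made up, not the answerer's actual format), a data-driven test could walk such an XML file like this:

import static org.junit.Assert.assertEquals;

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import org.junit.Test;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XmlDrivenInterpreterTest {

    @Test
    public void runAllXmlTestCases() throws Exception {
        // Assumed layout: <tests><test testID=".." type=".." description="..">
        //                   <value>print 21 + 21;</value>
        //                   <expected>42</expected>
        //                 </test>...</tests>
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("src/test/resources/tests.xml"));

        NodeList tests = doc.getElementsByTagName("test");
        for (int i = 0; i < tests.getLength(); i++) {
            Element test = (Element) tests.item(i);
            String id = test.getAttribute("testID");
            String description = test.getAttribute("description");
            String snippet = test.getElementsByTagName("value").item(0).getTextContent();
            String expected = test.getElementsByTagName("expected").item(0).getTextContent();

            // testID and description identify the failing case in the report.
            assertEquals(id + ": " + description, expected, run(snippet));
        }
    }

    // Stand-in for the interpreter entry point from the question.
    private static String run(String source) {
        throw new UnsupportedOperationException("interpreter not shown");
    }
}

One drawback of a single loop like this is that it stops at the first failing element; the @Parameterized runner discussed in another answer below sidesteps that by turning each case into its own test.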
Since you are talking about other JVM languages, have you considered Groovy? You would have to add an external dependency, but only at compile/test time (you don't have to put it in your production package), and it provides multiline strings. And one major advantage in your case: its syntax is backwards compatible with Java (meaning you won't have to rewrite your tests)!
I have done this in the past. I did something similar to what home suggested: I used external file(s) containing the tests and their expected results, but with the @Parameterized test runner.
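A minimal sketch of that kind of test, assuming the snippets live as files under /temp and using a stand-in run method, might look like this:

import static org.junit.Assert.assertNotNull;

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class InterpreterFileTest {

    private final String fileName;
    private final String source;

    // The runner instantiates this class once per entry returned by data().
    public InterpreterFileTest(String fileName, String source) {
        this.fileName = fileName;
        this.source = source;
    }

    @Parameters
    public static List<Object[]> data() throws IOException {
        // Assumes /temp exists and contains only snippet files.
        List<Object[]> params = new ArrayList<Object[]>();
        for (File file : new File("/temp").listFiles()) {
            String contents = new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8);
            params.add(new Object[] { file.getName(), contents });
        }
        return params;
    }

    @Test
    public void test1() {
        // Each parameterized instance runs the snippet from one file.
        assertNotNull(fileName + " produced no output", run(source));
    }

    @Test
    public void test2() {
        // A second check per file, mirroring the test1()/test2() pair described below.
        run(source);
    }

    // Stand-in for the interpreter entry point from the question.
    private static String run(String source) {
        throw new UnsupportedOperationException("interpreter not shown");
    }
}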
Here we are running test1() & test2() once for each file in /temp, with the parameters of the filename and the contents of the file. The test class is instantiated and called for each item that you add into the list in the method annotated with @Parameters.

Using this test runner, you can rerun a particular file if it fails; most IDEs support rerunning a single failed test. The disadvantage of @Parameterized is that there isn't any way to sensibly identify the tests so that the names appear in the Eclipse JUnit plugin. All you get is 0, 1, 2, etc. But at least you can rerun the failed tests.
As home says, good logging is important to identify the failing tests correctly and to aid debugging especially when running outside the IDE.