Include sample files in your test project, and test using those. I tend to build the files into the test assembly and use Assembly.GetManifestResourceStream to pass it into the code. Using a Stream or TextReader in the API also means you can do very small tests using MemoryStream or StringReader with the data in the test code itself. (Unless you need to worry about detecting encodings, using a TextReader is probably more appropriate than a Stream.)
You could do it all with StringReader, but in my experience if you end up with several lines of test data it can get quite confusing - separate files can make it easier to see the data involved.
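As a concrete sketch of that setup - the `FlatFileParser` here is a hypothetical stand-in (I've inlined a trivial pipe-delimited parser so the example runs on its own), and the embedded-resource name is made up:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical parser - the point is that the API takes a TextReader,
// so tests can feed it a StringReader instead of a real file.
static class FlatFileParser
{
    public static List<string[]> ParseRecords(TextReader reader)
    {
        var records = new List<string[]>();
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            if (line.Length > 0)
                records.Add(line.Split('|'));
        }
        return records;
    }
}

class Program
{
    static void Main()
    {
        // Tiny test: the data lives in the test code itself via StringReader.
        using (var reader = new StringReader("ID001|Smith|42\nID002|Jones|17"))
        {
            var records = FlatFileParser.ParseRecords(reader);
            Console.WriteLine(records.Count);   // 2
            Console.WriteLine(records[0][1]);   // Smith
        }

        // Larger samples would come from an embedded resource instead,
        // e.g. a file marked "Embedded Resource" in the test project:
        // using (var stream = Assembly.GetExecutingAssembly()
        //            .GetManifestResourceStream("MyTests.SampleData.flat"))
        // using (var reader = new StreamReader(stream)) { ... }
    }
}
```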
Note that this doesn't check that the output of the external systems hasn't changed - as Pontus says, that would involve system/integration tests. However, you don't want to do most of your testing at that level in my experience. You should have a mixture of tests at different levels, but the higher the level of the test, the longer it's likely to take to run - and the harder it may be to set up.
You may want tests which only test the external systems: have a piece of sample data which you expect to receive from the external systems, then one unit test which checks that your code handles that data appropriately, and one "external" test which calls the external systems and checks that they produce that exact file. That way, you'll very quickly be able to tell whether a failure is due to your code changing or the external systems changing.
This doesn't sound like test driven development or unit testing at all: what you're describing is integration monitoring and/or system testing of the external systems. Are you describing a production environment or a development environment scenario? If the text files change, who should adapt (your consumer or the producing system)?
Whatever you do, make sure you have a clearly defined contract specifying the format and content of the text files which form the interface between your system and the external ones. If possible, implement monitoring functions in the production environments which trigger a warning if a source system deviates from the contract.
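A production-side monitor along those lines can be quite small - this sketch assumes a hypothetical contract of three pipe-separated fields with an "ID" + digits key:

```csharp
using System;
using System.IO;
using System.Text.RegularExpressions;

// Hypothetical contract: each line is "ID<3 digits>|<name>|<number>".
static class ContractMonitor
{
    static readonly Regex RecordPattern = new Regex(@"^ID\d{3}\|[^|]*\|\d+$");

    // Returns false (and warns) if any line deviates from the contract.
    public static bool Validate(TextReader reader, Action<string> warn)
    {
        bool ok = true;
        string line;
        int lineNumber = 0;
        while ((line = reader.ReadLine()) != null)
        {
            lineNumber++;
            if (!RecordPattern.IsMatch(line))
            {
                warn($"Line {lineNumber} deviates from contract: {line}");
                ok = false;
            }
        }
        return ok;
    }
}

class Program
{
    static void Main()
    {
        using (var good = new StringReader("ID001|Smith|42"))
            Console.WriteLine(ContractMonitor.Validate(good, Console.Error.WriteLine)); // True

        using (var bad = new StringReader("not a record"))
            Console.WriteLine(ContractMonitor.Validate(bad, Console.Error.WriteLine));  // False
    }
}
```

In production the `warn` callback would feed whatever alerting you already have (log file, event log, monitoring system) rather than the console.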
Is there a specification for the format of the flat files?
If so, you should include sample files that have each feature of the specification, and write a test for each feature.
If the flat files do not conform to a specification, you can't really do TDD on them - you might always receive a new file with something unknown in it. In this case you would have to write your own specification (based on observation/research) and do TDD against that. But you would still be exposed to unknown data breaking your code.
The moral? Make sure you have at least a working practice specification.
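To make the per-feature idea concrete - the features and the three-field check below are hypothetical stand-ins for a real specification and parser, and in a real test project each sample would live in its own embedded file:

```csharp
using System;
using System.Linq;

class PerFeatureTests
{
    // Stand-in check: the (hypothetical) spec says every record has 3 fields.
    static bool AllRecordsHaveThreeFields(string sample) =>
        sample.Split('\n').All(line => line.Split('|').Length == 3);

    static void Main()
    {
        // One minimal sample per specification feature.
        var samplesByFeature = new (string Feature, string Sample)[]
        {
            ("single record",        "ID001|Smith|42"),
            ("empty optional field", "ID002||17"),
            ("multiple records",     "ID003|Lee|30\nID004|Kim|25"),
        };

        foreach (var (feature, sample) in samplesByFeature)
        {
            // A real test would assert on the parsed records; here we just
            // check the field count the spec requires.
            bool pass = AllRecordsHaveThreeFields(sample);
            Console.WriteLine($"{feature}: {(pass ? "PASS" : "FAIL")}");
        }
    }
}
```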
From a .NET perspective, flat files are essentially strings, so I would design most of my API around testing input and output strings.
If those strings get too large, you can move them into separate files, but it is always better testing practice to reduce each test case to the bare minimum necessary to reproduce/exercise a certain feature.
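A string-in/string-out test then needs no files at all - `FormatNames` here is a hypothetical transformation (pipe-delimited records in, "Last, First" lines out) just to show the shape:

```csharp
using System;
using System.Linq;

static class FlatFileTransform
{
    // Hypothetical: reads "id|first|last" records, emits "Last, First" lines.
    public static string FormatNames(string input) =>
        string.Join("\n", input.Split('\n')
            .Where(line => line.Length > 0)
            .Select(line => line.Split('|'))
            .Select(f => $"{f[2]}, {f[1]}"));
}

class Program
{
    static void Main()
    {
        // The whole test case is two small strings.
        string input = "ID001|Jane|Smith\nID002|John|Jones";
        string expected = "Smith, Jane\nJones, John";
        Console.WriteLine(FlatFileTransform.FormatNames(input) == expected); // True
    }
}
```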