Organising large spec files in RSpec
I was just wondering how others organise large spec files (especially for models) with many contexts and sections organised in describe blocks for validations and other specs that can be grouped in some meaningful way.
Do you guys keep all the specs concerning a model in the same spec file for that model, or do you split them into modules in one way or another?
I have never cared too much about this so far but I am wondering what others do, as there doesn't seem to be some sort of agreement around a best practice or such.
I've got some pretty large spec files for some models that I'd like to organise into smaller files, and there is little to no functionality shared across different models, so I am not sure whether shared examples would be the way to go about this (reusability aside) or whether there is some better way. Any suggestions?
Thanks in advance.
Nested contexts can help you here, but keep it shallow (typically one level deep). There are two variables to consider in each example: givens (starting state) and what method is being invoked. You can group things by method or state:
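A sketch of the two layouts, using a hypothetical Account model (the method and example names are made up for illustration):

```ruby
# Grouping by method: one describe block per method,
# with a context for each starting state.
RSpec.describe Account do
  describe "#withdraw" do
    context "with sufficient funds" do
      it "reduces the balance"
    end

    context "with insufficient funds" do
      it "raises an error"
    end
  end
end

# Grouping by state: one context per starting state,
# with examples for each method invoked from that state.
RSpec.describe Account do
  context "with sufficient funds" do
    it "reduces the balance on #withdraw"
    it "increases the balance on #deposit"
  end

  context "with insufficient funds" do
    it "raises an error on #withdraw"
  end
end
```

Either way, keep the nesting to one level of context inside the describe; deeper trees make it hard to work out the givens for any single example.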
I've used both approaches and seen them both work really well and really badly. The key to either approach is that the examples tell a story as you read from top to bottom. As requirements evolve, this means you need to review this file, just as you do your implementation code.
An easy way to check that the spec makes sense is to run it with the documentation formatter:
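For example (the spec path here is hypothetical):

```
rspec spec/models/account_spec.rb --format documentation
```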
This spits out all the names in order (provided you're not using --order rand):
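For a spec grouped by method (continuing the hypothetical Account model), the output might look like:

```
Account
  #withdraw
    with sufficient funds
      reduces the balance
    with insufficient funds
      raises an error
```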
or
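for one grouped by state:

```
Account
  with sufficient funds
    reduces the balance on #withdraw
    increases the balance on #deposit
  with insufficient funds
    raises an error on #withdraw
```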
Once you see this output, it'll be pretty clear to you whether the organization you're using makes sense or not.