How does your continuous integration work?
I'm building a CI server and would really appreciate hearing about real experiences, and getting an overview of what people are using.
So, what are your build processes? Is there something like:
- one hourly build for code and tests,
- another daily build for the MSI and code metrics,
- etc.?
And also, what does your complete build process use? Do you use something like:
- TeamCity,
- MSBuild,
- NUnit - for tests,
- NCover - for test coverage,
- NDepend - for code metrics,
- Sandcastle - for documentation from the code comments,
- TestComplete - for QA tests,
- etc.?
Share! ;)
Comments (8)
We had a similar conversation at the most recent CITCON North America (the Continuous Integration and Testing Conference), where we all shared our experiences and tried to put together a road map from simple CI to very built-out CI and release systems.
The original conference notes are here, along with a Flickr photostream.
A cleaned-up version is available at the Urbancode blog as well.
The Aussies revisited the topic at CITCON Brisbane, and a pencast of that is available.
Hope some of those resources are useful.
For Java, we have an instance of Hudson checking for commits in the SVN repository; for every commit there is a build in which everything is compiled and all the unit tests are run using Maven2. Hudson is also connected to an instance of Sonar, which gives us stats on coding style and test coverage.
Sonar screenshot: http://nemo.sonarsource.org/charts/trends/60175?sids=1024412,1025601,1026859,1073764,1348107,2255284&metrics=complexity,mandatory%5Fviolations%5Fdensity,lines,coverage&format=png&ts=1244661473034
Sweet :)
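The per-commit flow above can be sketched roughly like this. This is an illustrative model, not Hudson's actual API: the function names are made up, and only the Maven goals (`mvn clean test`, `mvn sonar:sonar`) reflect the usual commands for such a setup.

```python
# Hypothetical sketch of the per-commit flow: poll SVN, and when the head
# revision moves, run the Maven build (compile + unit tests), then push
# stats to Sonar. Names are illustrative, not Hudson's real interfaces.

def should_build(last_built_rev: int, head_rev: int) -> bool:
    """A new build is needed whenever a commit moved the head forward."""
    return head_rev > last_built_rev

def build_commands(with_sonar: bool = True) -> list[str]:
    """Commands the CI job would run for each triggering build."""
    cmds = ["mvn clean test"]           # compile everything, run unit tests
    if with_sonar:
        cmds.append("mvn sonar:sonar")  # report style/coverage stats to Sonar
    return cmds

# e.g. head moved from r100 to r103 -> one build covering all three commits
assert should_build(100, 103)
assert build_commands() == ["mvn clean test", "mvn sonar:sonar"]
```

Note that several commits arriving between polls still produce a single build, which is why Hudson reports the revision range a build covers.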
On my previous project we used to have two luntbuild servers plus an SVN server.
The first luntbuild machine was used for building the project: an incremental build + unit tests for each commit, and then a clean build + unit tests + complete install packaging during the night.
The second luntbuild machine was used as a testing rig for integration testing. As soon as the first machine finished building the nightly install, the second would pick it up, deploy it on itself, and then run the full suite of integration tests (a JUnit-based driver of the Swing GUI). So each morning the test engineers would get an install along with a sanity-check report, and they could decide whether or not to take the new build.
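The sanity-check report at the end of that nightly handoff could look something like this minimal sketch. All names and fields here are invented for illustration; luntbuild itself has no such API.

```python
# Illustrative sketch of the nightly sanity-check report: the test rig runs
# the integration suite against the nightly install and summarizes the
# results as a go/no-go hint for the test engineers. Hypothetical names.

def sanity_report(build_id: str, test_results: dict[str, bool]) -> dict:
    """Summarize an integration-test run for one nightly build."""
    failed = sorted(name for name, ok in test_results.items() if not ok)
    return {
        "build": build_id,
        "total": len(test_results),
        "failed": failed,
        "take_build": not failed,  # engineers' go/no-go hint
    }

report = sanity_report("nightly-042", {"login_flow": True, "report_gen": False})
assert report["take_build"] is False
assert report["failed"] == ["report_gen"]
```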
Build processes - we have 4 currently active branches of a large code base that we run builds for continuously. For each branch, the builds are broken down into two stages.
Our build process is coordinated by Zed Builds And Bugs and includes Ant, Make, Maven, JUnit, FindBugs, and shell scripts (historical), across Windows, Linux, AIX, HP, and Solaris.
We are currently working on including more roll-ups of historical trends and statistics, so that we can see from a higher level how the dev process is going.
We're using CruiseControl.NET as our CI server, in combination with NAnt. Most builds (we have around 30) are triggered on changes. Some less important, heavy builds are triggered only once each night; this also goes for the maintenance builds, which clean up most of the normal builds.
For our C/C++ builds we use a proprietary build system that can distribute code builds to every machine available in the company (like IncrediBuild, but much more flexible). For our C# builds we call devenv.com directly, but we use NUnit to run the unit tests. Our C++ unit tests use our own framework; running them produces XML very similar to NUnit's. For some extra code checks we run PC-lint each night. No code coverage is done yet, which is a bit of a shame.
We also use the system to prepare the final builds of our product. That's just the first step, though; some manual actions are still needed afterwards.
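The triggering scheme described above (change-triggered builds plus once-a-night heavy and maintenance builds) can be modelled in a few lines. The config format below is invented for illustration; CruiseControl.NET itself configures triggers in XML (`intervalTrigger`, `scheduleTrigger`).

```python
# Rough model of the triggering scheme: most builds fire on changes, a few
# heavy/maintenance builds fire only in the nightly window. Build names and
# the dict-based config are hypothetical.

BUILDS = {
    "app-debug":    {"trigger": "on_change"},
    "app-release":  {"trigger": "on_change"},
    "pclint-check": {"trigger": "nightly"},
    "maintenance":  {"trigger": "nightly"},
}

def builds_to_run(source_changed: bool, is_nightly_window: bool) -> list[str]:
    """Select which builds are due, given what happened this cycle."""
    due = []
    for name, cfg in BUILDS.items():
        if cfg["trigger"] == "on_change" and source_changed:
            due.append(name)
        elif cfg["trigger"] == "nightly" and is_nightly_window:
            due.append(name)
    return due

assert builds_to_run(source_changed=True, is_nightly_window=False) == ["app-debug", "app-release"]
assert builds_to_run(source_changed=False, is_nightly_window=True) == ["pclint-check", "maintenance"]
```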
In my case (an in-house designed/built/supported CB system), commits to the VCS in the tree targeted by a given CB config automatically queue a CB request (multiple requests arriving while a CB is running get collapsed into one, which will run as soon as the current CB process is done).
Each CB instance responds to a CB request by performing the build and test steps it's configured to do (farming them out in parallel to a "cloud" of distributed servers shared by all CB instances), logging the build and test results, and occasionally (not more often than a configured frequency) launching "heavy tests" (which may run for a VERY long time and will NOT block oncoming CB requests -- heavy tests are forked off completely, though the logs make it very clear exactly which build they ran against).
"Sync to head" (that "head" would be "trunk" in other VCSs ;-), for dependencies that are not part of the tree tracked by a CB, may happen every time (these would be lightweight, non-production-critical, or experimental builds), or only on very explicit integration requests (the other extreme, for "release branches" of builds/projects that ARE production-critical), or with intermediate tolerance.
Not the apex of release-engineering practice, I think, but with its range of options it works well for us, for a really wide variety of projects of very diverse criticality, dependency-heaviness, &c ;-).
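The request-collapsing behaviour described in the first paragraph can be sketched in a few lines -- an illustrative model, not the actual in-house system:

```python
# Sketch of CB request collapsing: commits that arrive while a build is
# running fold into a single pending request, which starts as soon as the
# current build finishes. Class and method names are hypothetical.

class CBQueue:
    def __init__(self):
        self.running = False
        self.pending = False   # any number of requests collapse into this flag

    def commit_arrived(self):
        if self.running:
            self.pending = True   # collapse, no matter how many commits land
        else:
            self.running = True   # idle: start a build immediately

    def build_finished(self):
        self.running = self.pending  # a pending request starts right away
        self.pending = False

q = CBQueue()
q.commit_arrived()                      # starts a build
q.commit_arrived(); q.commit_arrived()  # both collapse into one pending run
q.build_finished()
assert q.running and not q.pending      # exactly one follow-up build runs
q.build_finished()
assert not q.running                    # queue drains after that one build
```

The point of the collapse is that a burst of commits costs at most one extra build, while every commit is still covered by some build.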
Jenkins is the best tool for Continuous Integration (CI). CI is nothing more than frequently integrating the code in your repository (SCM); you then hook the SCM into Jenkins to build your code.
You can set the polling frequency in Jenkins, so that whenever changes are made and committed to the SCM, Jenkins will try to make a build. That way it works as continuous integration.
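The polling decision itself is simple enough to model in one function. This is a sketch of the idea only; a real Jenkins job configures it declaratively with a cron-style "Poll SCM" schedule rather than code like this.

```python
# Minimal model of SCM polling: at each polling interval, compare the
# repository head with the last revision that was built, and schedule a
# build if they differ. Illustrative only, not Jenkins's implementation.

def poll(last_built_rev: int, scm_head_rev: int) -> bool:
    """Return True if a new build should be scheduled on this poll."""
    return scm_head_rev != last_built_rev

assert poll(41, 42) is True    # a commit landed since the last build
assert poll(42, 42) is False   # nothing new, no build
```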
The CI of Instagram works like this:
At Instagram, we deploy our backend code 30-50 times a day... whenever engineers commit changes to master... with no human involvement in most cases. This may sound crazy, especially at our scale, but it works really well. This post talks about how we implemented this system and got it working smoothly...