What standards does your team enforce for major-release code deployments?

Posted 2024-07-16 07:21:54


I'm curious as to what sort of standards other teams make sure are in place before code ships (or deploys) out the door in major releases.

I'm not looking for specific answers to each, but here's an idea of what I'm trying to get at.

  • For server-based apps, do you ensure monitoring is in place? To what degree... just that it responds to ping? That it can hit all of its dependencies at any given moment? That the logic the app actually serves is sound (e.g., a service that calculates 2+2 actually returns "4")? (A minimal probe sketch follows this list.)
  • Do you require automated build scripts before code is released? Meaning, any dev can walk onto a new box, yank something from source control, and start developing? Given things like an OS and IDE, of course.
  • How about automated deployment scripts, for server-based apps?
  • What level of documentation do you require for a project to be "done?"
  • Do you make dang sure you have a full-fledged backup plan for all of the major components of the system, if it's server-based?
  • Do you enforce code quality standards? Think StyleCop for .NET or cyclomatic complexity evaluations.
  • Unit testing? Integration tests? Performance load testing?
  • Do you have standards for how your application's error logging is handled? How about error notification?
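
To make that first monitoring bullet concrete, here is a minimal, purely illustrative Java probe; the URLs and the 2+2 check are made-up placeholders, not anyone's production code.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative monitoring probe covering three degrees of checking:
// liveness (does it respond), dependency reachability, and logic sanity
// (the "2+2 returns 4" idea). All URLs below are placeholders.
public class HealthProbe {

    // HEAD-request a URL and treat any non-5xx answer as "reachable".
    static boolean reachable(String url) {
        try {
            HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
            c.setRequestMethod("HEAD");
            c.setConnectTimeout(3000);
            return c.getResponseCode() < 500;
        } catch (Exception e) {
            return false;
        }
    }

    // Logic sanity: exercise a known input and verify the known answer.
    static boolean logicSound() {
        int result = 2 + 2; // stand-in for a real service call such as calc.add(2, 2)
        return result == 4;
    }

    public static void main(String[] args) {
        System.out.println("app alive:        " + reachable("http://localhost:8080/ping"));
        System.out.println("dependency alive: " + reachable("http://db-proxy:9000/ping"));
        System.out.println("logic sound:      " + logicSound());
    }
}
```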

Again, not looking for a line-by-line punchlist of answers to anything above, necessarily. In short, what non-coding items must a code release have completed before it's officially considered "done" for your team?

Comments (8)

叫思念不要吵 2024-07-23 07:21:54

The minimum:

  1. unit tests work
  2. integration tests work
  3. deploy on test stage ok
  4. manual short check on test stage

Better:

  1. unit tests work
  2. checkstyle ok
  3. integration tests work
  4. metrics like jmeter and test coverage passed
  5. deploy on test stage ok
  6. some manual tests on test stage

Finally, deploy on the production stage.

All unit and integration tests run automatically, ideally on a continuous integration server like CruiseControl, driven by ant or maven. When developing web services, testing with soapui works fine.

If a database is used, an automatic upgrade is performed (with liquibase, for example) before deployment. When external services are used, additional configuration tests are needed to ensure the URLs are OK (HEAD request from the application, database connect, WSDL GET, ...); a minimal sketch of such checks follows below.
When developing webapps, HTML validation on some pages is useful. A manual check of the layout (using browsershots, for example) is also useful.
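
To illustrate those configuration tests, here is a minimal Java sketch; the JDBC URL, credentials, and WSDL URL are placeholders, not from a real setup.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.sql.Connection;
import java.sql.DriverManager;

// Pre-deployment configuration test (illustrative): can we open the database,
// and can we fetch the WSDL of an external service? All values are placeholders.
public class ConfigTest {

    // Open a database connection and verify it is usable.
    static boolean dbConnects(String jdbcUrl, String user, String pass) {
        try (Connection c = DriverManager.getConnection(jdbcUrl, user, pass)) {
            return c.isValid(5);
        } catch (Exception e) {
            return false;
        }
    }

    // GET the WSDL and check we received a non-empty HTTP 200 response.
    static boolean wsdlFetches(String wsdlUrl) {
        try {
            HttpURLConnection c = (HttpURLConnection) new URL(wsdlUrl).openConnection();
            c.setRequestMethod("GET");
            try (InputStream in = c.getInputStream()) {
                return c.getResponseCode() == 200 && in.read() != -1;
            }
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("db ok:   " + dbConnects("jdbc:postgresql://host/app", "app", "secret"));
        System.out.println("wsdl ok: " + wsdlFetches("http://partner.example.com/service?wsdl"));
    }
}
```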

(All example links for Java development)

And last (but not least): do all acceptance tests still pass? Is the product what the owner wants? Do a live review with them on the test system before going further!

撩发小公举 2024-07-23 07:21:54

I mostly do web development, so my items may be different from yours. Just off the top of my head...

  • Ensure all web services are up-to-date
  • Ensure all database scripts/changes/migrations are already deployed to the production server (see the sketch below)
  • Minify all JS and CSS files.
  • Make sure all unit/functional/integration/Selenium tests are passing (We aim for 95%+ test coverage while we're developing, so these are usually pretty accurate in determining a problem)

There's more, I know there is, but I can't think of anything else right now.
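
One concrete (and purely hypothetical) way to gate on the database-migration bullet above: a small Java check against an assumed schema_version table. The table name, JDBC URL, and version number are illustrative assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative release gate: confirm the production schema is at the
// expected migration version. The "schema_version" table, the JDBC URL,
// and the expected version are assumptions, not from the original post.
public class MigrationCheck {
    public static void main(String[] args) throws Exception {
        int expected = 42; // migration version shipped with this release (placeholder)
        try (Connection c = DriverManager.getConnection(
                 "jdbc:postgresql://prod-db/app", "release", "secret");
             Statement s = c.createStatement();
             ResultSet r = s.executeQuery("SELECT MAX(version) FROM schema_version")) {
            r.next();
            int actual = r.getInt(1);
            if (actual < expected) {
                throw new IllegalStateException(
                    "DB at migration " + actual + ", release needs " + expected);
            }
            System.out.println("Schema up to date: " + actual);
        }
    }
}
```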

过气美图社 2024-07-23 07:21:54

Each and every project is different; however, as a rule of thumb, here are the core things I try to have done before letting code out into the wild.

In no particular order:

1) A version identification in place where a user can find it later; this must be unique to the release. (Very typically a "version number" attached to the distributable, the libraries, and the executable, or visible to the user from an "About" dialog. It could be a number at a well-known register or offset in firmware. A small sketch of reading such a stamp follows this list.)

2) A snapshot of the exact code used to produce the release. (a label or a branch of the release in the SCM system is good for this)

3) All the tools necessary to recreate the release from source must be noted and archived (the source from step 2 is of limited use without this).

4) An archive of the actual release (a copy of the exact installer released; who knows, in 7 years your tools may not be able to build it, but at least you will have the source code and an installable at hand for investigation purposes).

5) A set of documented changes between this release version and the previous one aka Release Notes (I like to use the style of appending to the list so that all release changes are available in one place for a user).

6) Candidate release test cycle complete. Using the distributable you created, load and test against the full, vetted test plan to be sure core functionality is operational and all new features are present and operating as intended.

7) Defect tracking shows all outstanding items are flagged as a) fixed b) not a defect c) deferred.
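
As a sketch of point 1, here is one common Java way to expose a unique version stamp at runtime; the version.properties file name and key are assumptions, not a prescription.

```java
import java.io.InputStream;
import java.util.Properties;

// Illustrative version stamp: the build writes version.properties onto the
// classpath (e.g. "version=2.3.1-r4812"); the app reads it back for the
// About dialog or the logs. The file name and key are assumptions.
public final class Version {
    public static String get() {
        Properties p = new Properties();
        try (InputStream in = Version.class.getResourceAsStream("/version.properties")) {
            if (in != null) {
                p.load(in);
            }
        } catch (Exception e) {
            // fall through to the default below
        }
        return p.getProperty("version", "unknown");
    }

    public static void main(String[] args) {
        System.out.println("Build version: " + get());
    }
}
```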

You can sprinkle in many other steps depending upon domain or development style, but I would state that most software "should be" performing the above steps each and every release. YMMV.

Have fun storming the castle.

漫雪独思 2024-07-23 07:21:54
  • Codestyle (automated)
  • Automated tests (unit & integration tests; a minimal JUnit sketch follows this list)
  • Manual tests (including test and beta stages)
  • Whitebox penetration testing tool (automated)
  • Blackbox penetration testing tool (automated)
  • Manual exception/logging monitoring on test/beta stages before rollout
  • Ability to revert to the previous version at any time
  • Code review & detection of 'illegal check-ins'
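
A minimal JUnit 4 sketch of the automated-tests item; this is a generic example, and the Calculator class is hypothetical.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Generic JUnit 4 example (illustrative; Calculator is a made-up class,
// inlined below so the example is self-contained).
public class CalculatorTest {

    // Unit test: pure logic, no external dependencies.
    @Test
    public void addsTwoNumbers() {
        assertEquals(4, new Calculator().add(2, 2));
    }

    // The class under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }
}
```
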
若有似无的小暗淡 2024-07-23 07:21:54

For web / internal apps one thing in addition to the other suggestions.

Make sure to involve the ops/deployment team so you don't deliver software that requires more servers than they have (don't assume the people pushing the requirements already have).

后来的我们 2024-07-23 07:21:54
  • Review the checklist: check that all the new features, change requests, and bug fixes planned for the version have been finished.
  • The build (on the build machine) compiles without any warnings or errors in Release mode.
  • All the automated unit tests run without error.
  • All the messages and images have been approved by the product team.
  • Performance checks are no worse than the former version.
  • The full (manual) test plan has been checked by the test team without errors.
    • The application is tested in many possible scenarios (different OSes, database engines, configurations, and third-party applications).
    • All the features of the application are tested: many times a change in one feature has broken another we thought unrelated; it happens, so we have to minimize it.
    • The setup or deployment works in all of those scenarios too.
    • The setup is able to upgrade former versions.
余生再见 2024-07-23 07:21:54

We did a major release recently, so this is still pretty fresh in my mind. We make a Windows application with a GUI for which we release a binary executable, so my list is necessarily going to be substantially different from that for a web-only release.

  1. Release candidates go out to the testing team. They need at least a few days to play with it. If they find any bugs that we consider show-stoppers, release is aborted. This presumes you have a testing team. We only clear a release candidate if at least one week has passed since its build date.

  2. All automated testing has to work and pass. Automated testing is considered a supplement to the live testers.

  3. Any bugs marked as "blockers" must be resolved for the final build.

  4. Publicity material has to be ready (in our case, a web-page update and an email newsletter). Resellers are alerted that a release is coming several weeks in advance, so that they can prepare their material as well. This mostly isn't a programmer concern, but we do check marketing claims for accuracy.

  5. Licensing has to be updated to reflect whatever copy-protection we're using. Our beta versions and the release versions use different licensing models, and this change requires programming effort.

  6. The installer and license agreement have to be updated. Since the beta versions have an installer, this is usually just a text change, but it still falls to the programmers to actually update the install script.

  7. Any references to the beta version need to be removed from the application itself. We missed a few of these, embarrassingly.

  8. Help files and manuals have to be brought completely up to date and proofread, since they are part of the release package.

  9. If there were bugs that couldn't be fixed in time, we would at least try to mitigate the damage: for example, detect that such-and-such bug was occurring, and abort the operation with an apologetic error message. This contributes enormously to perceived product stability. (A small sketch of this pattern follows the list.)
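
A tiny, generic Java sketch of that mitigation pattern; the locale trigger and the export operation are invented examples, not our product's code.

```java
// Illustrative bug mitigation: a known, unfixed defect is detected early and
// the operation is aborted with a polite message instead of corrupting state.
// KNOWN_BAD_LOCALE and the export operation are hypothetical examples.
public class ExportAction {
    private static final String KNOWN_BAD_LOCALE = "tr-TR"; // placeholder bug trigger

    public void run(String locale) {
        if (KNOWN_BAD_LOCALE.equals(locale)) {
            // Mitigation path: fail fast, apologize, keep the app stable.
            showError("Sorry, export is temporarily unavailable for this "
                    + "language setting. We are working on a fix.");
            return;
        }
        doExport(locale);
    }

    private void showError(String message) { System.err.println(message); }
    private void doExport(String locale) { /* real work here */ }
}
```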

Far and away, the difficulties of a major release were not programming problems; they were administrative/marketing problems. Many of these things required programmer attention: helping with installers, proofreading the feature list to make sure none of it was nonsense, proofreading technical sections of the manual, updating licensing, etc. The main technical difference was the shift from bug-fixing to bug-mitigating.

月下凄凉 2024-07-23 07:21:54
  1. No visible bugs? OK.
  2. Unit tests work? OK (some ignored). Ha, well, OK.
  3. Setup? Ya, sure. OK.
  4. Error logging? Of course! :-) We need this! To fix the bugs!
  5. All on CruiseControl.NET. Nice.