What is the best solution for handling development across multiple platforms (dev/integ/valid/prod...)? Delivery process

Posted 2024-10-09 17:38:01


I'm not very experienced, but I have worked on some big Java EE projects (using Maven 2) with very distinct ways of handling installation and delivery on the different platforms.

1) One of them was to use snapshots for development and then make a Maven release of the components and main web applications. Thus the delivery consists of:

  • war/ear files
  • List item
  • properties files
  • sgdb files
  • some others

And the teams will use those files to put the new application versions on the different platforms.
I think this process is strict and always lets you easily keep track of the different configurations delivered to production, but it isn't really flexible, the process is a bit heavy, and it led us to sometimes do dirty things like overriding a class inside a war to patch a regression...
This is an e-commerce website with 10 million unique visitors per month and 99.89% availability.

2) Another one I saw was to check out the sources on each platform and then install the snapshot artifacts in a local repository. The application server then uses these snapshots from the .m2 folder.
There is no real delivery process, since to put a new version in production we just have to update the sources of the components/webapps, run a Maven clean install and restart the application server.
I think it's more flexible, but I see some drawbacks, and this approach seems dangerous to me.
This website has a front office; I don't know the numbers, but it's far less than the first one. It also has a big back office used by most employees of a 130,000-person company.

I guess that depending on the website, its exposure to the public and the required availability, we have to adapt the delivery strategy to the needs.

I'm not here to ask which solution is the best, but I wonder whether you have seen different approaches, and which strategy you would use in which case?


Comments (5)

瑕疵 2024-10-16 17:38:01


Without dealing with web sites, I had to participate in the release management process for various big (Java) projects in a heterogeneous environment:

  • development on "PC", meaning in our case Windows -- sadly still Windows XP for now -- (and unit testing)
  • continuous integration and system testing on Linux (because it is cheaper to set up)
  • pre-production and production on Solaris (Sun Fire for instance)

The common method I saw was:

  • binary dependency (each project uses the binaries produced by the other project, not their sources)
  • no recompilation for integration testing (the jars produced on PC are directly used on linux farms)
  • full recompilation on pre-production (from the binaries stored in the Maven repo), at least to make sure that everything is recompiled with the same JDK and the same options.
  • no VCS (Version Control System, like SVN, Perforce, Git, Mercurial, ...) on the production system: everything is deployed from pre-prod through rsync.

So the various parameters to take into account for a release management process are:

  • when you develop your project, do you depend directly on the sources or the binaries of the other projects?
  • where do you store your setting values?
    Do you parametrize them and, if so, when do you replace the variables with their final values (only at startup, or also at runtime)?
  • do you recompile everything on the final (pre-production) system?
  • how do you access/copy/deploy to your production system?
  • how do you stop/restart/patch your applications?

(And this is not an exhaustive list.
Depending on the nature of the application release, other concerns will have to be addressed.)
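The settings question above (parametrize the values, then replace the variables at startup) can be sketched minimally. This is an illustrative Python sketch, not code from the answer; the property names and platform values are made up:

```python
import re

def resolve_settings(template: dict, values: dict) -> dict:
    """Replace ${NAME} placeholders in every setting once, at startup.

    Raises KeyError for an unresolved placeholder so a bad deployment
    fails fast instead of running with a broken configuration.
    """
    pattern = re.compile(r"\$\{([^}]+)\}")

    def substitute(text: str) -> str:
        def lookup(match: re.Match) -> str:
            name = match.group(1)
            if name not in values:
                raise KeyError(f"unresolved placeholder: {name}")
            return values[name]
        return pattern.sub(lookup, text)

    return {key: substitute(val) for key, val in template.items()}

# The parametrized template ships with the artifact; only the values
# differ per platform (dev / integ / valid / prod).
template = {
    "db.url": "jdbc:oracle:thin:@${DB_HOST}:1521:${DB_SID}",
    "cache.size": "${CACHE_SIZE}",
}
prod_values = {"DB_HOST": "db-prod-01", "DB_SID": "PRODDB", "CACHE_SIZE": "512"}
settings = resolve_settings(template, prod_values)
```

Substituting only at startup (as here) keeps the running application immutable; substituting at runtime would additionally require re-reading the values on each access.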

甜心 2024-10-16 17:38:01


The answer to this varies greatly depending on the exact requirements and team structures.

I've implemented processes for a few very large websites with similar availability requirements and there are some general principles I find have worked:

  • Externalise any config so that the same built artifact can run on all your environments. Then build the artifacts only once for each release - rebuilding for different environments is time-consuming and risky, e.g. it is not the same app that you tested.
  • Centralise the place where the artifacts get built - e.g. all production wars must be packaged on the CI server (using the Maven release plugin on Hudson works well for us).
  • All changes for a release must be traceable (version control, audit table, etc.) to ensure stability and allow for quick rollbacks and diagnostics. This doesn't have to mean a heavyweight process - see the next point.
  • Automate everything: building, testing, releasing, and rollbacks. If the process is dependable, automatable and quick, the same process can be used for everything from quick fixes to emergency changes. We use the same process for a quick 5-minute emergency fix and for a major release, because it is automated and quick.
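The "build once, run the same artifact everywhere" principle above can be sketched as a pipeline that refuses anything whose checksum differs from the CI build. A hypothetical Python sketch (class, environment names and artifact bytes are invented for illustration):

```python
import hashlib

class ReleasePipeline:
    """Promote one immutable artifact; reject per-environment rebuilds."""

    def __init__(self, ci_build: bytes):
        # The only build that may ever be deployed is the one made on CI.
        self.checksum = hashlib.sha256(ci_build).hexdigest()
        self.deployed = {}  # environment name -> checksum

    def deploy(self, environment: str, build: bytes) -> None:
        checksum = hashlib.sha256(build).hexdigest()
        if checksum != self.checksum:
            # A rebuild is "not the same app that you tested".
            raise ValueError(f"{environment}: artifact differs from the CI build")
        self.deployed[environment] = checksum

war = b"PK...binary contents of the released war..."  # stand-in bytes
pipeline = ReleasePipeline(war)
pipeline.deploy("test", war)
pipeline.deploy("production", war)
```

The same identity check is what makes rollbacks cheap: any previously released checksum can be redeployed with full confidence that it is exactly what was tested.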

Some additional pointers:

See my answer to "property-placeholder location from another property" for a simple way to load different properties per environment with Spring.

http://wiki.hudson-ci.org/display/HUDSON/M2+Release+Plugin If you use this plugin and ensure that only the CI server has the correct credentials to perform Maven releases, you can ensure that all releases are performed consistently.

http://decodify.blogspot.com/2010/10/how-to-build-one-click-deployment-job.html A simple way of deploying your releases. Although for large sites you will probably need something more complicated to ensure no downtime - e.g. deploying to half the cluster at a time and flip-flopping web traffic between the two halves - http://martinfowler.com/bliki/BlueGreenDeployment.html
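The blue-green pattern linked above can be sketched minimally: two identical environments, deploy to the idle one, then flip the traffic pointer, and rollback is just flipping back. A hypothetical sketch, not tied to any particular load balancer:

```python
class BlueGreenCluster:
    """Two identical halves; only the 'live' half receives web traffic."""

    def __init__(self, initial_version: str):
        self.versions = {"blue": initial_version, "green": None}
        self.live = "blue"

    @property
    def idle(self) -> str:
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version: str) -> None:
        # The new release goes to the half serving no traffic,
        # so users never see a half-deployed application.
        self.versions[self.idle] = version

    def flip(self) -> None:
        # Switching traffic is a single, near-instant operation...
        self.live = self.idle

    def rollback(self) -> None:
        # ...and so is undoing it: the previous version is still deployed.
        self.live = self.idle

cluster = BlueGreenCluster("1.0")   # 1.0 serving traffic on blue
cluster.deploy("1.1")               # 1.1 installed on green, blue untouched
cluster.flip()                      # green now live, serving 1.1
```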

http://continuousdelivery.com/ A good website and book with some very good patterns for releasing.

Hope this helps - good luck.

妥活 2024-10-16 17:38:01


To add to my previous answer, what you are dealing with is basically a CM-RM issue:

  • CM (Change Management)
  • RM (Release Management)

In other words, after the first release (i.e. once the main initial development is over), you keep making releases, and that is what CM-RM is supposed to manage.

The implementation of the RM can be either 1) or 2) in your question, but my point would be to add to that mechanism:

  • proper CM in order to track any change request, and evaluate their impact before committing to any development
  • proper RM in order to be able to run the "release" tests (system, performance, regression, deployment tests), and then to plan, schedule, perform and monitor the release itself.

旧梦荧光笔 2024-10-16 17:38:01


Without claiming it's the best solution, this is how my team currently does staging and deployment.

  • Developers initially develop at their local machine, the OS is free to choose, but we strongly encourage using the same JVM as will be used in production.
  • We have a DEV server to which snapshots of the code are frequently pushed. This is simply an scp of the binary build produced by the IDE. We plan to build directly on the server, though.
  • The DEV server is used by stakeholders to continuously peek at ongoing development. By its very nature it's unstable. This is well known to all users of this server.
  • If the code is good enough, it's branched and pushed to a BETA server. Again, this is an scp of a binary build from the IDE.
  • Testing and general QA take place on this BETA server.
  • Meanwhile, if any emergency changes are necessary for the software currently in production, we have a third staging server called the UPDATE server.
  • The UPDATE server is initially only used to stage very small fixes. Here too we use scp to copy binaries.
  • After all testing is conducted on UPDATE, we copy the build from UPDATE to LIVE. Nothing ever goes to the live servers directly, it always goes via the update server.
  • When all testing is finalized on BETA, the tested build is copied from the beta server to the UPDATE server and a final round of sanity testing is performed. Since this is the exact build that was tested on the beta server, it is very unlikely that problems are found in this stage, but we uphold the rule that everything deployed to the live server should go via the update server and that everything on the update server should be tested before moving it on.

This sliding strategy allows us to develop for 3 versions in parallel. Version N that's currently in production and staged via the update server, version N+1 that will be the next major release that's about to be released and is staged on the beta server, and version N+2 that is the next-next major release for which development is currently underway and is staged on the dev server.
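The promotion rules above (nothing reaches LIVE except via UPDATE, and only the exact build that passed the sanity round there) can be sketched as a small guard. A hypothetical Python sketch; the stage and version names are invented:

```python
class StagingChain:
    """DEV -> BETA -> UPDATE -> LIVE, with LIVE reachable only via UPDATE."""

    def __init__(self):
        self.builds = {"DEV": None, "BETA": None, "UPDATE": None, "LIVE": None}
        self.tested_on_update = None  # build that passed the sanity round

    def stage(self, server: str, build: str) -> None:
        if server == "LIVE":
            # Enforce the team rule: only the exact build that was
            # staged and tested on UPDATE may be copied to LIVE.
            if build != self.tested_on_update:
                raise PermissionError("LIVE only accepts the build tested on UPDATE")
        self.builds[server] = build

    def sanity_test_update(self) -> None:
        self.tested_on_update = self.builds["UPDATE"]

site = StagingChain()
site.stage("BETA", "v2.0")        # QA happens here
site.stage("UPDATE", "v2.0")      # copied from BETA
site.sanity_test_update()         # final round of sanity testing
site.stage("LIVE", "v2.0")        # allowed: the exact tested build
```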

Some of the choices that we made:

  • A full application (an EAR) typically depends on artifacts from other projects. We choose to include the binaries of those other projects instead of building the whole thing from source. This simplifies building and gives greater assurance that a tested application is bundled with exactly the right versions of all its dependencies. The cost is that a fix in such a dependency has to be manually distributed to all applications that depend on it.
  • Configuration for every stage is embedded in the EAR. We currently use a naming convention, and a script copies the right version of each configuration file to the right location. Parameterizing the path of each configuration file, e.g. by using a single {stage} placeholder in a root config file, is currently being considered. The reason we store the config in the EAR is that the developers are the ones who introduce and depend on configuration, so they should be the ones responsible for maintaining it (adding new entries, removing unused ones, tweaking existing ones, etc.).
  • We use a DevOps strategy for a deployment team. It consists of a person who is purely a developer, two persons who are both developer and operations and two persons who are purely operations.

Embedding the configuration in the EAR might be controversial, since traditionally operations needs to have control about e.g. the DB data sources being used in production (to what server it points to, how many connections a connection pool is allowed to have, etc). However, since we have persons on the development team who are also in operations, they are easily able to sanity check the changes made by other developers in the configuration while the code is still in development.
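The naming convention plus {stage} placeholder being considered above could look roughly like this. The file names, keys and layout are hypothetical; the answer does not show the real script:

```python
def config_for_stage(bundled: dict, stage: str) -> str:
    """Pick a stage's config file by naming convention and expand
    the single {stage} placeholder."""
    name = f"datasource-{stage}.properties"
    if name not in bundled:
        raise FileNotFoundError(name)
    return bundled[name].replace("{stage}", stage)

# Stand-in for the files a deployment script would find inside the EAR.
bundled = {
    "datasource-beta.properties": "db.host=db-{stage}.internal\ndb.pool.max=10",
    "datasource-live.properties": "db.host=db-{stage}.internal\ndb.pool.max=50",
}
beta_conf = config_for_stage(bundled, "beta")
```

Failing loudly on a missing file is deliberate: a stage without its named config is a packaging error, not something to paper over with defaults.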

Parallel to the staging, we have a continuous build server doing a scripted (Ant) build after every check-in (at most once per 5 minutes), which runs unit tests and some other integrity tests.

It remains difficult to say whether this is a best-of-breed approach and we're constantly trying to improve our process.

满意归宿 2024-10-16 17:38:01


I am a big advocate of a single deployable containing everything (Code, Config, DB Delta, ...) for all environments, built and released centrally on the CI server.

The main idea behind this is that code, config and DB delta are tightly coupled anyway. The code depends on certain properties being set in the config and on certain objects (tables, views, ...) being present in the DB. So why split them up and spend your time tracking everything to make sure it fits together, when you can just ship it all together in the first place?
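The "ship it together" idea can be sketched as a completeness check on the deployable: a release is refused unless code, config and DB delta all travel in the same bundle. A hypothetical sketch; the file names are invented:

```python
def check_deployable(bundle: dict) -> None:
    """Refuse a release unless code, config and DB delta travel together."""
    required = {"code", "config", "db_delta"}
    missing = required - bundle.keys()
    if missing:
        raise ValueError(f"incomplete deployable, missing: {sorted(missing)}")

release = {
    "code": "shop-2.4.ear",
    "config": "settings-2.4.properties",
    "db_delta": "migrate-2.3-to-2.4.sql",
}
check_deployable(release)   # complete: passes silently
```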

Another big aspect is minimizing differences between environments, to reduce failure causes to the absolute minimum.

More details in my Continuous Delivery talk on Parleys: http://parleys.com/#id=2443&st=5
