What are the best solutions to handle multi-platform (dev/integ/valid/prod...) development? Delivery process
I'm not very experienced, but I have worked on some big Java EE projects (using Maven 2) with very distinct ways of handling installation/delivery on the different platforms.
1) One of them was to use snapshots for development and then make a Maven release of the components and main web applications. Thus the delivery consists of:
- war/ear files
- properties files
- sgdb files
- some others
Teams then use those files to deploy the new application versions on the different platforms.
I think this process is strict and lets you easily keep track of the different configurations that went into production, but it's not really flexible, the process is a bit heavy, and it sometimes led us to do dirty things like overriding a class inside a war to patch a regression...
This is an e-commerce website with 10 million unique visitors per month and 99.89% availability.
2) Another approach I saw was to check out the sources on each platform and then install the snapshot artifacts in a local repository. The application server then uses these snapshots from the .m2 folder.
There is no real delivery process: to put a new version into production, we just update the sources of the components/webapps, run a Maven clean install, and restart the application server.
I think it's more flexible, but I see some drawbacks, and this approach seems dangerous to me.
This website has a front office; I don't know the numbers, but its traffic is far lower than the first one's. It also has a big back office available to most employees of a 130,000-person company.
I guess that depending on the website, its exposure to the public, and the required availability, we have to adapt the delivery strategy to the needs.
I'm not asking which solution is the best, but I wonder whether you have seen different approaches, and which strategy you would use in which case.
Without dealing with web sites specifically, I have had to participate in the release management process for various big (Java) projects in a heterogeneous environment:
The common method I saw was:
So the various parameters to take into account for a release management process are:
Do you parametrize them and, if yes, when do you replace the variables with their final values (only at startup, or also during runtime)? One way of doing the startup-time replacement is sketched after this list.
(And this is not an exhaustive list; depending on the nature of the application release, other concerns will have to be addressed.)
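On the parameterization point, here is a minimal Java sketch, purely illustrative, of the "replace the variables at startup" option: an environment-specific properties file is loaded once when the application starts, and ${...} placeholders in a configuration template are resolved against it. The file names and the env system property are assumptions made up for the example.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Resolves ${...} placeholders in a configuration template against an
 * environment-specific properties file, once, at application startup.
 * File names (config-template.properties, env-*.properties) are invented
 * for the example.
 */
public final class StartupConfig {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)}");

    public static Properties load(String env) throws IOException {
        Properties envValues = read("env-" + env + ".properties"); // e.g. env-prod.properties
        Properties template = read("config-template.properties");  // contains ${db.url} etc.

        Properties resolved = new Properties();
        for (String key : template.stringPropertyNames()) {
            resolved.setProperty(key, substitute(template.getProperty(key), envValues));
        }
        return resolved;
    }

    private static String substitute(String value, Properties envValues) {
        Matcher m = PLACEHOLDER.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Unknown placeholders are left untouched rather than failing the startup.
            String replacement = envValues.getProperty(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    private static Properties read(String resource) throws IOException {
        Properties p = new Properties();
        try (InputStream in = StartupConfig.class.getResourceAsStream("/" + resource)) {
            if (in == null) {
                throw new IOException("Missing classpath resource: " + resource);
            }
            p.load(in);
        }
        return p;
    }

    public static void main(String[] args) throws IOException {
        // The target environment is picked once at startup, e.g. with -Denv=prod
        Properties config = load(System.getProperty("env", "dev"));
        config.list(System.out);
    }
}
```

Resolving everything once at startup keeps runtime behaviour identical across platforms; a runtime-replacement variant would instead re-read the environment values on every lookup.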
The answer to this varies greatly depending on the exact requirements and team structures.
I've implemented processes for a few very large websites with similar availability requirements, and there are some general principles I have found to work:
Some additional pointers:
See my answer to "property-placeholder location from another property" for a simple way to load different properties per environment with Spring (a minimal sketch of the idea follows after this list).
http://wiki.hudson-ci.org/display/HUDSON/M2+Release+Plugin If you use this plugin and ensure that only the CI server has the correct credentials to perform Maven releases, you can ensure that all releases are performed consistently.
http://decodify.blogspot.com/2010/10/how-to-build-one-click-deployment-job.html A simple way of deploying your releases. Although for large sites you will probably need something more complicated to ensure no downtime - e.g. deploying to half the cluster at a time and flip-flopping web traffic between the two halves - http://martinfowler.com/bliki/BlueGreenDeployment.html
http://continuousdelivery.com/ A good website and book with some very good patterns for releasing.
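This is not the exact configuration from the answer linked above, but a minimal Spring Java-config sketch, assuming Spring 3.1+, of the per-environment properties idea: the active environment is chosen via a system property (here called env, an assumption), which decides which properties file backs the ${...} placeholders.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

/**
 * Sketch only: the file names (app-dev.properties, app-prod.properties) and
 * the "env" system property are assumptions for the example.
 */
@Configuration
@PropertySource("classpath:app-${env:dev}.properties") // -Denv=prod picks app-prod.properties
public class AppConfig {

    // Required so that ${...} placeholders in @Value are resolved.
    @Bean
    public static PropertySourcesPlaceholderConfigurer placeholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public DataSourceSettings dataSourceSettings(@Value("${db.url}") String url,
                                                 @Value("${db.user}") String user) {
        return new DataSourceSettings(url, user);
    }

    /** Simple holder for the resolved values. */
    public static class DataSourceSettings {
        public final String url;
        public final String user;

        public DataSourceSettings(String url, String user) {
            this.url = url;
            this.user = user;
        }
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
        System.out.println(ctx.getBean(DataSourceSettings.class).url);
        ctx.close();
    }
}
```

Running with -Denv=prod loads app-prod.properties; with nothing set it falls back to app-dev.properties, so the same artifact can be promoted unchanged between environments.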
Hope this helps - good luck.
To add to my previous answer, what you are dealing with is basically a CM-RM (Configuration Management / Release Management) issue:
In other words, after the first release (i.e. once the main initial development is over), you have to keep making releases, and that is what CM-RM is supposed to manage.
The implementation of the RM can be either 1) or 2) in your question, but my point would be to add to that mechanism:
Without claiming it's a best solution, this is how my team currently does staging and deployment.
- Staged via scp from the binary build produced from the IDE. We plan to build directly on the server though.
- Staged via scp of a binary build from the IDE.
- Uses scp to copy binaries.
This sliding strategy allows us to develop 3 versions in parallel: version N, which is currently in production and staged via the update server; version N+1, the next major release that is about to be released and is staged on the beta server; and version N+2, the next-next major release for which development is currently underway and which is staged on the dev server.
Some of the choices that we made:
Embedding the configuration in the EAR might be controversial, since traditionally operations needs to have control over, e.g., the DB data sources being used in production (which server they point to, how many connections a connection pool is allowed to have, etc.). However, since we have people on the development team who are also in operations, they can easily sanity-check the changes made by other developers in the configuration while the code is still in development.
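As an illustration of what "embedding the configuration in the EAR" can look like, here is a sketch using the Java EE 6 @DataSourceDefinition annotation; this is my own example rather than this team's exact setup, and every name and value in it is invented.

```java
import javax.annotation.sql.DataSourceDefinition;
import javax.ejb.Singleton;
import javax.ejb.Startup;

/**
 * Sketch: ships the DB data source (server, credentials, pool size) inside the
 * deployable instead of configuring it on the application server. All names
 * and values below are invented for the example.
 */
@DataSourceDefinition(
        name = "java:app/jdbc/ShopDS",
        className = "org.postgresql.ds.PGSimpleDataSource",
        serverName = "db.internal.example.com",
        portNumber = 5432,
        databaseName = "shop",
        user = "shop_app",
        password = "changeit",   // in practice, substituted per environment at build time
        minPoolSize = 2,
        maxPoolSize = 20)
@Singleton
@Startup
public class DataSourceConfig {
    // Intentionally empty: the annotation alone registers the data source
    // under java:app/jdbc/ShopDS when the EAR/WAR is deployed.
}
```

The point is that the server, credentials and pool settings travel with the deployable instead of living only in the application server's configuration, so they are visible in source control and reviewable during development.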
Parallel to the staging, we have the continuous build server doing a scripted (Ant) build after every check-in (at most once per 5 minutes), running unit tests and some other integrity tests.
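The integrity tests are not described in detail here; one plausible, purely illustrative example is a JUnit test that makes the CI build fail fast when the bundled configuration is missing a key the code relies on. The file name and keys below are made up.

```java
import static org.junit.Assert.assertTrue;

import java.io.InputStream;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import org.junit.Test;

/**
 * Illustrative "integrity test": the property keys and file name are invented;
 * the idea is to let the CI build fail fast on an incomplete configuration.
 */
public class ConfigurationIntegrityTest {

    private static final List<String> REQUIRED_KEYS =
            Arrays.asList("db.url", "db.user", "mail.smtp.host");

    @Test
    public void everyRequiredKeyIsPresent() throws Exception {
        Properties config = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/application.properties")) {
            assertTrue("application.properties is missing from the classpath", in != null);
            config.load(in);
        }
        for (String key : REQUIRED_KEYS) {
            assertTrue("Missing configuration key: " + key, config.containsKey(key));
        }
    }
}
```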
It remains difficult to say whether this is a best-of-breed approach and we're constantly trying to improve our process.
I am a big advocate of a single deployable containing everything (Code, Config, DB Delta, ...) for all environments, built and released centrally on the CI server.
The main idea behind this is that Code, Config & DB Delta are tightly coupled anyway. The code depends on certain properties being set in the config and on some objects (tables, views, ...) being present in the DB. So why split this up and spend your time tracking everything to make sure it fits together, when you can just ship it together in the first place?
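To make the idea concrete, here is a rough Java sketch, my own illustration rather than anything from the talk, of the kind of startup step such a deployable could run: SQL delta scripts bundled on the classpath are applied in order and recorded, so the schema the code expects is guaranteed to be there. The file names and the schema_version table are invented; a real project would more likely use a dedicated tool such as Flyway or Liquibase.

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

/**
 * Minimal "DB delta" runner bundled with the deployable. Called once at
 * startup, before the application serves traffic, with a connection to the
 * target environment's database.
 */
public final class DbDeltaRunner {

    // Deltas shipped inside the artifact, applied strictly in this order.
    private static final List<String> DELTAS =
            Arrays.asList("001_create_orders.sql", "002_add_order_index.sql");

    public static void apply(Connection connection) throws Exception {
        try (Statement st = connection.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS schema_version (delta VARCHAR(255) PRIMARY KEY)");
        }
        for (String delta : DELTAS) {
            if (!alreadyApplied(connection, delta)) {
                runScript(connection, delta);
                record(connection, delta);
            }
        }
    }

    private static boolean alreadyApplied(Connection c, String delta) throws Exception {
        try (PreparedStatement ps = c.prepareStatement("SELECT 1 FROM schema_version WHERE delta = ?")) {
            ps.setString(1, delta);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    private static void runScript(Connection c, String delta) throws Exception {
        InputStream in = DbDeltaRunner.class.getResourceAsStream("/db/" + delta);
        if (in == null) {
            throw new IllegalStateException("Missing bundled delta: " + delta);
        }
        String sql;
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            sql = reader.lines().collect(Collectors.joining("\n"));
        }
        try (Statement st = c.createStatement()) {
            st.execute(sql);
        }
    }

    private static void record(Connection c, String delta) throws Exception {
        try (PreparedStatement ps = c.prepareStatement("INSERT INTO schema_version (delta) VALUES (?)")) {
            ps.setString(1, delta);
            ps.executeUpdate();
        }
    }
}
```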
Another big aspect is minimizing differences between environments, to reduce failure causes to the absolute minimum.
More details in my Continuous Delivery talk on Parleys: http://parleys.com/#id=2443&st=5