I highly recommend Blueprint from DevStructure. It's open-source and your use case is actually the exact reason we originally wrote the software. Our goals have somewhat changed, but it's still the perfect tool for what you are describing. In short, you can create reusable server configs - dead simple configuration management. I hope this helps!
https://github.com/devstructure/blueprint (Blueprint @ Github)
There are several options, and sometimes a combination of these is useful:
automated installation
disk imaging
virtualization
source code control
Details on the various options:
Automated Installation. Tools for automating installation and configuration of a workstation's various services, tools and config files:
Puppet has a learning curve but is powerful. You define classes of machines (development box, web server, etc.) and it then does what is necessary to install, configure, and keep the box in the proper state. You asked for one-click, but Puppet by default is zero-click, as it checks your machine periodically to make sure it is still configured as desired. It will detect when a file or mode has been changed, and fix the problem. I currently use this to maintain a handful of RedHat Linux boxes, though it's capable of handling thousands. (Does not support Windows as of 2009-05-08).
Cfengine is another one. I've seen this used successfully at a shop with 70 engineers using RedHat Linux. Its limitations were part of the reason for Puppet.
SmartFrog is another tool for configuring hosts. It does support Windows.
Shell scripts. RightScale has examples of how to configure an Amazon EC2 image using shell scripts.
Install packages. On a Unix box it's possible to do this entirely with packages, and on Windows msi may be an option. For example, RubyWorks provides you with a full Ruby on Rails stack, all by installing one package that in turn installs other packages via dependencies.
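To illustrate the dependency idea with a stock Debian/Ubuntu example (not RubyWorks itself), installing a single meta-package drags in a whole toolchain:

    # One package install pulls in an entire toolchain via dependencies
    # (gcc, g++, make, libc headers, etc. all arrive automatically).
    sudo apt-get install build-essential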
Disk Images. Then of course there are also disk imaging tools for storing an image of a configured host such that it can be restored to another host. As with virtualization, this is especially nice for test boxes, since it's easy to restore things to a clean slate. Keeping things continuously up-to-date is still an issue--is it worth making new images just to propagate a configuration file change?
Virtualization is another option, for example making copies of a Xen, VirtualPC, or VMWare image to create new hosts. This is especially useful with test boxes, as no matter what mess a test creates, you can easily restore to a clean, known state. As with disk imaging tools, keeping hosts up-to-date requires more manual steps and vigilance than if an automated install/config tool is used.
Source Code Control. Once you've got the necessary tools installed/configured, then doing builds should be a matter of checking out what's needed from a source code repository and building it.
Currently I use a combination of the above to automate the process as follows:
Start with a barebones OS install on a VMWare guest
Run a shell script to install Puppet and retrieve its configs from source code control (a sketch of such a script follows this list)
Puppet to install tools/components/configs
Check out files from source code control to build and deploy our web application
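A minimal sketch of the bootstrap script from step 2, assuming a Red Hat style guest and a Subversion repo for the Puppet configs (the repo URL, paths, and package names are made up):

    #!/bin/bash
    # Bootstrap a fresh VM: install Puppet, pull its configs, and run it once.
    set -e
    sudo yum -y install puppet subversion                                 # assumes these packages are available
    sudo svn checkout http://scm.example.com/puppet-config /etc/puppet    # hypothetical config repo and path
    sudo puppet apply /etc/puppet/manifests/site.pp                       # one-shot run; the agent keeps the box in sync afterwards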
I stumbled across this question and was very surprised that no one has mentioned Vagrant yet.
As Pete TerMaat and others have mentioned, virtualization is a great way to manage and automate development environments. Vagrant basically takes the pain away from setting up these virtual boxes.
Within minutes you can have a completely fresh copy of your favourite Linux distro up and running, and provisioned exactly the same way your production server is.
No more fighting with OSX or Windows to get PHP, MySQL, etc. installed. All software lives and runs inside the virtual machine. You can even SSH in with vagrant ssh. If you make a mistake or break something, just vagrant destroy it, and vagrant up to start over fresh.
Vagrant automatically creates a synced folder between the virtual machine and your local file system, meaning you don't need to develop within the virtual machine (i.e. using Vim). Use whatever editor you prefer.
I now create a new "Vagrant box" for almost every project I do. All my settings are saved into the project repository, so it's easy to bring on another team member. They simply have to pull the repo, and run vagrant up, and they are literally ready to go.
This also makes it much easier to handle projects that have different software requirements. Maybe you have some projects that rely on PHP 5.3, but some newer ones that run PHP 5.4. Just install the version you want for that project.
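For a new team member, the whole Vagrant workflow is only a few commands (the repo URL here is hypothetical):

    # Clone the project, bring up the pre-provisioned VM, and work against it.
    git clone http://example.com/acme-project.git && cd acme-project
    vagrant up        # builds and provisions the VM from the checked-in Vagrantfile
    vagrant ssh       # shell into the VM if you need to
    vagrant destroy   # broke something? throw the VM away...
    vagrant up        # ...and rebuild it from scratch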
One important point is to set up your projects in source control such that you can immediately build, deploy and run after checkout.
That means you should also check in helper infrastructure, such as Makefiles, Ant build files, etc., and settings for the tools, such as IDE project files.
That should take care of the setup hassle for individual projects.
For the basic machine setup, you could use a standard image. Another option is to use your platform's tools to automate installation. Under Linux, you could create a meta-package that depends on all the packages you need. Under Windows, a similar thing should be possible using MSI or the like.
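On Debian/Ubuntu such a meta-package can be built with the equivs tool; a rough sketch (the package name and dependency list are made up):

    # Build a meta-package that depends on everything a dev box needs.
    sudo apt-get install equivs
    equivs-control acmecorp-dev-env.ctl        # generates a template control file
    # edit the file: set Package: acmecorp-dev-env and
    # Depends: build-essential, subversion, eclipse, mysql-server, ...
    equivs-build acmecorp-dev-env.ctl          # produces acmecorp-dev-env_*.deb
    sudo dpkg -i acmecorp-dev-env_*.deb && sudo apt-get -f install   # -f pulls in the dependencies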
Edit:
Ideally, instead of checking in helper infrastructure, you check in the information that allows the build to generate the helper infrastructure. This is the approach taken by e.g. the GNU build system (autotools etc.), or by Maven. This is even more elegant, because you can (theoretically) generate infrastructure for any (supported) build environment, thus you are not bound to e.g. one specific IDE, and settings in the helper infrastructure (paths etc.) don't need to duplicate the main project settings.
However, this is also a more complex approach, so if you can't get it to work, I believe checking in stuff like IDE files directly is acceptable.
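As one illustration of generating IDE files rather than checking them in, Maven (in that era) could emit project files for several IDEs from the checked-in POM; a sketch:

    # Generate IDE project files from the Maven POM instead of versioning them.
    mvn eclipse:eclipse    # writes .project/.classpath for Eclipse
    mvn idea:idea          # writes IntelliJ IDEA project files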
I like to use Virtual PC or VMware to virtualize the development environment. This provides a standard "dev environment" that can be shared among developers. You don't have to worry about software that the user could add to their system that may conflict with your development environment. It also gives me a way to work on two projects whose development environments can't both be on one system (because they use two different versions of a core technology).
There's always the option of using virtual machines (see e.g. VMWare Player). Create one environment and copy it over for each new employee with minimal configuration needed.
At a prior place we had everything (and I mean EVERYTHING) in SCM (ClearCase, then SVN). When a new developer came in, they installed ClearCase/SVN and sucked down the repository. This also handles the case where you need to update a particular lib/tool, as you can just have the dev teams update their environment.
We used two repos for this, so code and tools/config lived in separate places.
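So a new hire's setup could boil down to two checkouts, something like this (the URLs and paths are made up):

    # Everything lives in version control: one repo for tools/config, one for code.
    svn checkout http://svn.example.com/tools ~/tools   # compilers, libs, IDE settings, ...
    svn checkout http://svn.example.com/code  ~/work    # the actual source tree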
I've been thinking about this myself. There are some other technologies that you could throw into the mix. Here's what I'm currently setting up:
PXE based pre-seeded installation images (Debian Squeeze). You can start up a bare-metal machine (or new virtual appliance) and select the image from the PXE boot menu. This has the major advantage of being able to install your environment on physical machines (in addition to virtual appliances).
Someone already mentioned Puppet. I use CFEngine but it's a similar deal. Essentially your configuration is documented and centralized in policy files which are continually enforced by an agent on the client.
If you don't want a rigid environment (i.e. developers may choose a combination of tool-sets), you can roll your own deb packages so new devs can type sudo apt-get install acmecorp-eclipse-env or sudo apt-get install acmecorp-intellij-env, for example.
Slightly off-topic, but if you run a Debian-based environment (e.g. Ubuntu), consider installing apt-cacher (a package proxy). In addition to saving bandwidth, it will make your installations much faster (since packages are cached on your local network).
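Pointing clients at the cache is a one-line apt configuration; a sketch assuming apt-cacher on its usual port 3142 and a hypothetical hostname:

    # Route all apt downloads through the local apt-cacher box.
    echo 'Acquire::http::Proxy "http://apt-cache.acmecorp.local:3142";' | \
        sudo tee /etc/apt/apt.conf.d/01proxy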
If you use machines in a standard configuration, you can image the disk with a fresh perfectly configured install -- that's a very popular approach in many corporations (and not just for developers, either). If you need separately configured OS's, you can tar-bz2 all the added and changed files once a configured OS is turned into your desired setup, and just untar it as root to make your desired environment from scratch.
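A rough sketch of the tar-bz2 approach (the file list is purely illustrative):

    # On the configured reference machine: capture everything you added or changed.
    tar -cjf dev-setup.tar.bz2 /usr/local /opt/tools /etc/profile.d/dev.sh
    # On a fresh install: unpack as root to recreate the environment.
    sudo tar -xjf dev-setup.tar.bz2 -C /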
Try out DevScript at http://nsnihalsahu.github.io/devscript . It's one command, like devscript lamp or devscript laravel or devscript django. It takes around a few minutes, depending on the speed of your internet connection.
If you're using a Linux flavor, you've probably got a package management system: think .rpm for Fedora/Red Hat, or .deb for Ubuntu/Debian. Many of the things you describe already have packages available: svn, eclipse, etc. You could roll your own packages for company-specific software, create a repository (perhaps only available on the local network), and then your setup could be reduced to a single bash script which would add the company repo to /etc/apt/sources.list (debian/ubuntu) and then call a command like,
/home/newhire$ apt-get update && apt-get install some complete package list
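Fleshing that out, the whole new-hire script might look roughly like this (the repo URL and package names are invented):

    #!/bin/bash
    # One-shot setup: add the company repo, then install the standard tool set.
    echo 'deb http://apt.acmecorp.local/debian stable main' | \
        sudo tee /etc/apt/sources.list.d/acmecorp.list
    sudo apt-get update
    sudo apt-get install subversion eclipse acmecorp-build-tools   # acmecorp-* is hypothetical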
You could then use buildbot to automate regular builds for company packages that change often.
Use puppet to configure both your development and production environment. Using a top-notch automation system is the only way to scale your ops.
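A hedged sketch of what that looks like day to day: the same manifests drive both environments, selected by the agent's environment setting (the environment names are assumptions):

    # Dev and prod boxes run the same Puppet code, just in different environments.
    sudo puppet agent --test --environment=development   # on a dev workstation
    sudo puppet agent --test --environment=production    # on a production server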
If you're using OSX and working with Rails, I'd suggest either: