Continuous Integration for "pretty" Python

Published 2024-07-08 09:57:16

This is a slightly.. vain question, but BuildBot's output isn't particularly nice to look at..

For example, compared to..

..and others, BuildBot looks rather.. archaic

I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to set up than BuildBot, and it produced more info)

Basically: are there any continuous-integration systems aimed at Python that produce lots of shiny graphs and the like?


Update: Since this was written, the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to that project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The update below is still essentially correct; only the starting point for doing this with Jenkins is different.

Update: After trying a few alternatives, I think I'll stick with Hudson. Integrity was nice and simple, but quite limited. I think Buildbot is better suited to having numerous build slaves, rather than everything running on a single machine the way I was using it.

Setting Hudson up for a Python project was pretty simple:

  • Download Hudson from http://hudson-ci.org/
  • Run it with java -jar hudson.war
  • Open the web interface on the default address of http://localhost:8080
  • Go to Manage Hudson, Plugins, click "Update" or similar
  • Install the Git plugin (I had to set the git path in the Hudson global preferences)
  • Create a new project, enter the repository, SCM polling intervals and so on
  • Install nosetests via easy_install if it isn't already installed
  • In a build step, add nosetests --with-xunit --verbose
  • Check "Publish JUnit test result report" and set "Test report XMLs" to **/nosetests.xml
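Once those steps are in place, nosetests discovers any function or module whose name starts with `test`. A minimal sketch of a module the build step above would collect (the file name and functions here are made up for illustration):

```python
# test_example.py -- a hypothetical module that `nosetests --with-xunit`
# would discover and record in nosetests.xml

def add(a, b):
    """Toy function under test."""
    return a + b

def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0

if __name__ == "__main__":
    # nosetests normally collects these automatically; calling them
    # directly is just a quick sanity check outside the CI job
    test_add_positive()
    test_add_negative()
    print("ok")
```

Each passing `test_*` function becomes a green entry in the published JUnit report; a failing assert shows up as a failure with its traceback.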

That's all that's required. You can set up email notifications, and the plugins are worth a look. A few I'm currently using for Python projects:

  • SLOCCount plugin to count lines of code (and graph it!) - you need to install sloccount separately
  • Violations to parse the PyLint output (you can set warning thresholds and graph the number of violations over each build)
  • Cobertura can parse the coverage.py output. Nosetests can gather coverage while running your tests, using nosetests --with-coverage (this writes the output to **/coverage.xml)
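For reference, the **/nosetests.xml file consumed by "Publish JUnit test result report" is plain JUnit-style XML. A sketch of its shape and how the counts can be read back out (the sample content is made up):

```python
# Parse a JUnit/xUnit-style report like the one nosetests --with-xunit
# writes; Hudson's JUnit publisher reads the same structure.
import xml.etree.ElementTree as ET

SAMPLE = """<testsuite name="nosetests" tests="2" errors="0" failures="1" skip="0">
  <testcase classname="test_example" name="test_add" time="0.002"/>
  <testcase classname="test_example" name="test_sub" time="0.001">
    <failure type="AssertionError">expected 1, got 2</failure>
  </testcase>
</testsuite>"""

suite = ET.fromstring(SAMPLE)
tests = int(suite.get("tests"))
failures = int(suite.get("failures"))
failed = [tc.get("name") for tc in suite.findall("testcase")
          if tc.find("failure") is not None]
print(tests, failures, failed)  # 2 1 ['test_sub']
```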


留一抹残留的笑 2024-07-15 09:57:16

You might want to check out Nose and the Xunit output plugin. You can have it run your unit tests, and coverage checks with this command:

nosetests --with-xunit --enable-cover

That'll be helpful if you want to go the Jenkins route, or if you want to use another CI server that has support for JUnit test reporting.

Similarly, you can capture the output of pylint using the Violations plugin for Jenkins.
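For context on what that plugin is parsing: pylint's "parseable" output format is one warning per line, roughly `file:line: [msg-id ...] message`. A rough sketch of pulling the fields out (the sample line is made up):

```python
import re

# pylint -f parseable emits lines shaped roughly like:
#   mypackage/mod.py:12: [C0111, MyClass.method] Missing docstring
LINE_RE = re.compile(r"^(?P<path>.+?):(?P<line>\d+): \[(?P<msg_id>[A-Z]\d+)")

sample = "mypackage/mod.py:12: [C0111, MyClass.method] Missing docstring"
m = LINE_RE.match(sample)
print(m.group("path"), m.group("line"), m.group("msg_id"))
```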

晌融 2024-07-15 09:57:16

Don't know if it would do: Bitten is made by the guys who wrote Trac and is integrated with Trac. Apache Gump is the CI tool used by Apache. It is written in Python.

智商已欠费 2024-07-15 09:57:16

We've had great success with TeamCity as our CI server and nose as our test runner. The TeamCity plugin for nosetests gives you pass/fail counts and a readable display of failed tests (which can be e-mailed). You can even see details of the test failures while your stack is running.

It of course supports things like running on multiple machines, and it's much simpler to set up and maintain than BuildBot.

救赎№ 2024-07-15 09:57:16

Buildbot's waterfall page can be considerably prettified. Here's a nice example http://build.chromium.org/buildbot/waterfall/waterfall

明媚殇 2024-07-15 09:57:16

I guess this thread is quite old but here is my take on it with hudson:

I decided to go with pip and set up a repo (Eggbasket: painful to get working, but nice-looking), which Hudson auto-uploads to after a successful test run. Here is my rough-and-ready script for use with a Hudson "execute script" config entry like /var/lib/hudson/venv/main/bin/hudson_script.py -w $WORKSPACE -p my.package -v $BUILD_NUMBER; just put **/coverage.xml, pylint.txt and nosetests.xml in the config bits:

#!/var/lib/hudson/venv/main/bin/python
import os
import re
import subprocess
import logging
import optparse

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')

#venvDir = "/var/lib/hudson/venv/main/bin/"

UPLOAD_REPO = "http://ldndev01:3442"

def call_command(command, cwd, ignore_error_code=False):
    try:
        logging.info("Running: %s" % command)
        status = subprocess.call(command, cwd=cwd, shell=True)
        if not ignore_error_code and status != 0:
            raise Exception("Last command failed")

        return status

    except Exception:
        logging.exception("Could not run command %s" % command)
        raise

def main():
    usage = "usage: %prog [options]"
    parser = optparse.OptionParser(usage)
    parser.add_option("-w", "--workspace", dest="workspace",
                      help="workspace folder for the job")
    parser.add_option("-p", "--package", dest="package",
                      help="the package name i.e., back_office.reconciler")
    parser.add_option("-v", "--build_number", dest="build_number",
                      help="the build number, which will get put at the end of the package version")
    options, args = parser.parse_args()

    if not options.workspace or not options.package:
        raise Exception("Need both args, do --help for info")

    venvDir = options.package + "_venv/"

    #find out if venv is there
    if not os.path.exists(venvDir):
        #make it
        call_command("virtualenv %s --no-site-packages" % venvDir,
                     options.workspace)

    #install the venv/make sure its there plus install the local package
    call_command("%sbin/pip install -e ./ --extra-index %s" % (venvDir, UPLOAD_REPO),
                 options.workspace)

    #make sure pylint, nose and coverage are installed
    call_command("%sbin/pip install nose pylint coverage epydoc" % venvDir,
                 options.workspace)

    #make sure we have an __init__.py
    #this shouldn't be needed if the packages are set up correctly
    #modules = options.package.split(".")
    #if len(modules) > 1: 
    #    call_command("touch '%s/__init__.py'" % modules[0], 
    #                 options.workspace)
    #do the nosetests
    test_status = call_command("%sbin/nosetests %s --with-xunit --with-coverage --cover-package %s --cover-erase" % (venvDir,
                                                                                     options.package.replace(".", "/"),
                                                                                     options.package),
                 options.workspace, True)
    #produce coverage report -i for ignore weird missing file errors
    call_command("%sbin/coverage xml -i" % venvDir,
                 options.workspace)
    #move it so that the code coverage plugin can find it
    call_command("mv coverage.xml %s" % (options.package.replace(".", "/")),
                 options.workspace)
    #run pylint
    call_command("%sbin/pylint --rcfile ~/pylint.rc -f parseable %s > pylint.txt" % (venvDir, 
                                                                                     options.package),
                 options.workspace, True)

    #remove old dists so we only have the newest at the end
    call_command("rm -rfv %s" % (options.workspace + "/dist"),
                 options.workspace)

    #if the build passes upload the result to the egg_basket
    if test_status == 0:
        logging.info("Success - uploading egg")
        upload_bit = "upload -r %s/upload" % UPLOAD_REPO
    else:
        logging.info("Failure - not uploading egg")
        upload_bit = ""

    #create egg
    call_command("%sbin/python setup.py egg_info --tag-build=.0.%s --tag-svn-revision --tag-date sdist %s" % (venvDir,
                                                                                                              options.build_number,
                                                                                                              upload_bit),
                 options.workspace)

    call_command("%sbin/epydoc --html --graph all %s" % (venvDir, options.package),
                 options.workspace)

    logging.info("Complete")

if __name__ == "__main__":
    main()

When it comes to deploying stuff you can do something like:

pip -E /location/of/my/venv/ install my_package==X.Y.Z --extra-index http://my_repo

And then people can develop stuff using:

pip -E /location/of/my/venv/ install -e ./ --extra-index http://my_repo

This stuff assumes you have a repo structure per package, with a setup.py and dependencies all set up; then you can just check out the trunk and run this stuff on it.

I hope this helps someone out.

------update---------

I've added epydoc, which fits in really nicely with Hudson. Just add a Javadoc publisher to your config, pointing it at the html folder.

Note that pip doesn't support the -E flag properly these days, so you have to create your venv separately.
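Given that note about -E, one way to sketch the replacement is to build the install command around the venv's own pip binary instead, much as the script above does. The paths here are hypothetical, and the --extra-index-url spelling of the flag is an assumption based on current pip:

```python
import os

def pip_install_command(venv_dir, package, index_url):
    """Build an install command that calls the venv's own pip binary,
    replacing the old `pip -E <venv> install ...` invocation."""
    pip = os.path.join(venv_dir, "bin", "pip")
    return [pip, "install", package, "--extra-index-url", index_url]

cmd = pip_install_command("/location/of/my/venv", "my_package==1.0", "http://my_repo")
print(" ".join(cmd))
# /location/of/my/venv/bin/pip install my_package==1.0 --extra-index-url http://my_repo
```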

︶ ̄淡然 2024-07-15 09:57:16

Atlassian's Bamboo is also definitely worth checking out. The entire Atlassian suite (JIRA, Confluence, FishEye, etc) is pretty sweet.

审判长 2024-07-15 09:57:16

another one : Shining Panda is a hosted tool for python

眉黛浅 2024-07-15 09:57:16

If you're considering a hosted CI solution and doing open source, you should look into Travis CI as well - it has very nice integration with GitHub. While it started as a Ruby tool, they added Python support a while ago.

活泼老夫 2024-07-15 09:57:16

Signal is another option. You can learn more about it and watch a video here.

伤痕我心 2024-07-15 09:57:16

I would consider CircleCi - it has great Python support, and very pretty output.

神经大条 2024-07-15 09:57:16

Continuum's Binstar is now able to trigger builds from GitHub and can compile for Linux, OS X and Windows (32/64). The neat thing is that it really allows you to closely couple distribution and continuous integration. That's crossing the t's and dotting the i's of integration. The site, workflow and tools are really polished, and AFAIK conda is the most robust and Pythonic way to distribute complex Python modules, where you need to wrap and distribute C/C++/Fortran libraries.

铁憨憨 2024-07-15 09:57:16

We have used Bitten quite a bit. It is pretty and integrates well with Trac, but it is a pain in the butt to customize if you have any nonstandard workflow. Also, there just aren't as many plugins as there are for the more popular tools. Currently we are evaluating Hudson as a replacement.

陌伤浅笑 2024-07-15 09:57:16

Check rultor.com. As this article explains, it uses Docker for every build. Thanks to that, you can configure whatever you like inside your Docker image, including Python.

浅浅淡淡 2024-07-15 09:57:16

Little disclaimer: I've actually had to build a solution like this for a client who wanted a way to automatically test and deploy any code on a git push, plus manage the issue tickets via git notes. This also led to my work on the AIMS project.

One could easily just set up a bare node system that has a build user and manage its builds through make(1), expect(1), crontab(1)/systemd.unit(5), and incrontab(1). One could even go a step further and use Ansible and Celery for distributed builds with a GridFS/NFS file store.

Although, I would not expect anyone other than a graybeard UNIX guy or a principal-level engineer/architect to actually go this far. It just makes for a nice idea and a potential learning experience, since a build server is nothing more than a way to arbitrarily execute scripted tasks in an automated fashion.
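To make that last point concrete, a build server boiled down to its essence might look like this toy runner (purely illustrative):

```python
# A "build server" reduced to its core: run scripted steps in order,
# stop on the first non-zero exit status, and report how far we got.
import subprocess
import sys

def run_pipeline(steps):
    """Run each step (an argv list); return (ok, steps_completed)."""
    for i, step in enumerate(steps):
        if subprocess.call(step) != 0:
            return False, i
    return True, len(steps)

ok, done = run_pipeline([
    [sys.executable, "-c", "print('linting...')"],
    [sys.executable, "-c", "print('testing...')"],
])
print(ok, done)  # True 2
```

Everything a real CI server adds on top - polling SCM, fanning out to slaves, publishing reports - is orchestration around this loop.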
