What is the cleverest use of a source repository you have ever seen?

Posted on 2024-09-08 17:32:45

This actually stems from my earlier question, where one of the answers made me wonder how people are using the SCM/repository in different ways for development.

Comments (3)

十秒萌定你 2024-09-15 17:32:45

Pre-tested commits

Before (TeamCity, build manager):

The concept is simple: the build system stands as a roadblock between your commit and trunk. Only after the build system determines that your commit doesn't break anything does it allow the commit into version control, where other developers then sync and integrate that change into their local working copies.

After (using a DVCS like Git, i.e. a source repository):

My workflow with Hudson for pre-tested commits involves three separate Git repositories:

  • my local repo (local),
  • the canonical/central repo (origin)
  • and my "world-readable" (inside the firewall) repo (public).

For pre-tested commits, I utilize a constantly changing branch called "pu" (potential updates) on the world-readable repo.
Inside of Hudson I created a job that polls the world-readable repo (public) for changes in the "pu" branch and will kick off builds when updates are pushed.

My workflow for taking a change from inception to origin is:

* hack, hack, hack
* commit to local/topic
* git pup public
* Hudson polls public/pu
* Hudson runs potential-updates job
* Tests fail?
      o Yes: Rework commit, try again
      o No: Continue
* Rebase onto local/master
* Push to origin/master

Using this pre-tested commit workflow I can offload the majority of my testing requirements to the build system's cluster of machines instead of running them locally, meaning I can spend the majority of my time writing code instead of waiting for tests to complete on my own machine in between coding iterations.
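The loop above can be sketched end-to-end with throwaway repositories. The remote names ("public", "origin") and the "pu" branch come from the answer; "git pup" is assumed to be a personal alias for pushing the topic branch to public/pu, so the sketch uses the plain push form:

```shell
#!/bin/sh
# Throwaway repos standing in for the three from the text:
# origin (canonical), public (world-readable), and a local clone.
set -e
WORK=$(mktemp -d)
git init -q --bare -b master "$WORK/origin.git"
git init -q --bare -b master "$WORK/public.git"

git init -q -b master "$WORK/local"
cd "$WORK/local"
git config user.email dev@example.com
git config user.name dev
git remote add origin "$WORK/origin.git"
git remote add public "$WORK/public.git"
echo base > file.txt
git add . && git commit -qm "base"
git push -q origin master

# hack, hack, hack; commit to local/topic
git checkout -qb topic
echo change >> file.txt
git commit -qam "candidate change"

# "git pup public": publish the candidate to public/pu for Hudson to build
git push -q public topic:pu

# ...the potential-updates job is green, so integrate and publish:
git checkout -q master
git rebase -q topic
git push -q origin master
```

Nothing reaches origin/master until the candidate has been built from public/pu, which is the whole point of the pre-tested commit.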


(Variation) Private Build (David Gageot, Algodeal)

Same principle as above, but the build is done on the same workstation as the one used for development, in a cloned repo:

How do you go without a CI server in the long term, yet not lose ever more time waiting on local builds?

With git, it’s a piece of cake.
First, we ‘git clone’ the working directory to another folder. Git does the copy very quickly.
Next time, we don’t need to clone: we just tell git to fetch the deltas. Net result: instant cloning. Impressive.

What about consistency?
A simple ‘git pull’ from the working directory will recognize, using the deltas’ digests, that the changes were already pushed to the shared repository.
Nothing to do. Impressive again.

Of course, while the build is running in the second directory, we can keep on working on the code. No need to wait.

We now have a private build with no maintenance, no additional installation, no dependence on the IDE, run with a single command line. No more broken builds in the shared repository. We can recycle our CI server.

Yes, you heard right. We’ve just built a serverless CI. Every additional feature of a real CI server is noise to me.

#!/bin/bash
# Determine the remote's push URL (fall back to the only URL listed).
if [ 0 -eq "$(git remote -v | grep -c push)" ]; then
  REMOTE_REPO=$(git remote -v | sed 's/origin//')
else
  REMOTE_REPO=$(git remote -v | grep "(push)" | sed 's/origin//' | sed 's/(push)//')
fi

# If a commit message was given, commit the working tree first.
if [ -n "$1" ]; then
  git add .
  git commit -a -m "$1"
fi

git pull

# Clone once into a private build directory; later runs only fetch deltas.
if [ ! -d ".privatebuild" ]; then
  git clone . .privatebuild
fi

cd .privatebuild || exit 1
git clean -df
git pull

if [ -e "pom.xml" ]; then
  mvn clean install
  STATUS=$?

  if [ "$STATUS" -eq 0 ]; then
    echo "Publishing to: $REMOTE_REPO"
    git push "$REMOTE_REPO" master
  else
    echo "Unable to build"
    exit "$STATUS"   # propagate mvn's exit code, not echo's
  fi
fi

Dmitry Tashkinov, who has an interesting question on DVCS and CI, asks:

I don't understand how "We’ve just built a serverless CI" squares with Martin Fowler's statement:
"Once I have made my own build of a properly synchronized working copy I can then finally commit my changes into the mainline, which then updates the repository. However my commit doesn't finish my work. At this point we build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that my changes are done. There is always a chance that I missed something on my machine and the repository wasn't properly updated."
Do you ignore or bend it?

@Dmitry: I neither ignore nor bend the process described by Martin Fowler in his ContinuousIntegration entry.
But you have to realize that a DVCS adds publication as a dimension orthogonal to branching.
The serverless CI described by David is just one implementation of the general CI process Martin details: instead of a dedicated CI server, you push to a local copy where a local CI runs, and then push "valid" code to a central repo.

@VonC, but the idea was to run CI NOT locally, precisely so as not to miss something in the transition between machines.
When you use the so-called local CI, it may pass all the tests just because it is local, but break down later on another machine.
So is it integration? I'm not criticizing at all; the question is difficult for me and I'm trying to understand.

@Dmitry: "So is it integration?"
It is one level of integration, one that helps get rid of all the basic checks (formatting issues, code style, basic static-analysis findings, ...).
Since you have that publication mechanism, you can chain that kind of CI to another CI server if you want. That server, in turn, can automatically push (if it is still a fast-forward) to the "central" repo.

David Gageot didn't need that extra level, being already at target in terms of deployment architecture (PC->PC), and needed only that basic kind of CI.
That doesn't prevent him from setting up a more complete system-integration server for more complete testing.
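The fast-forward condition mentioned above is exactly what a plain `git push` enforces by default, which is what lets a promoting CI server push tested code safely. A minimal sketch with hypothetical throwaway repos:

```shell
#!/bin/sh
# "git push" refuses a non-fast-forward update by default, so a chained CI
# server can promote tested commits without ever overwriting central history.
set -e
D=$(mktemp -d)
git init -q --bare -b master "$D/central.git"
git clone -q "$D/central.git" "$D/ci"
cd "$D/ci"
git config user.email ci@example.com
git config user.name ci
echo v1 > f.txt
git add . && git commit -qm "v1"
git push -q origin master                # fast-forward: accepted

git commit -q --amend -m "v1 rewritten"  # history rewrite: not a fast-forward
if git push -q origin master 2>/dev/null; then
  echo "pushed"
else
  echo "refused"                         # rejected by default, no --force given
fi
```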

黯淡〆 2024-09-15 17:32:45


My favorite? An unreleased tool which used Bazaar (a DSCM with very well-thought-out explicit rename handling) to track tree-structured data by representing the datastore as a directory structure.

This allowed an XML document to be branched and merged, with all the goodness (conflict detection and resolution, review workflow, and of course change logging and the like) made easy by modern distributed source control. Splitting the components of the document and its metadata into their own files kept proximity from creating false conflicts, and let all the work the Bazaar team put into versioning filesystem trees apply to other kinds of tree-structured data.
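The tool itself is unreleased (and used Bazaar), but the core idea works with any DVCS: store one element per file so edits to different parts of the document never touch the same file. A hypothetical layout, sketched with git since the other examples here use it:

```shell
#!/bin/sh
# One-element-per-file datastore: branches that edit different elements
# merge cleanly, because proximity in the XML no longer causes conflicts.
set -e
D=$(mktemp -d)
cd "$D"
mkdir -p book/chapters/ch1 book/chapters/ch2
echo "My Book"        > book/title
echo "First chapter"  > book/chapters/ch1/body
echo "Second chapter" > book/chapters/ch2/body

git init -q -b master
git config user.email a@example.com
git config user.name a
git add . && git commit -qm "initial datastore"

# Two branches edit different elements of the "document".
git checkout -qb edit-ch1
echo "First chapter, revised" > book/chapters/ch1/body
git commit -qam "revise ch1"
git checkout -q master
git checkout -qb edit-ch2
echo "Second chapter, revised" > book/chapters/ch2/body
git commit -qam "revise ch2"

# Merge both back: no false conflicts, full history per element.
git checkout -q master
git merge -q edit-ch1
git merge -q edit-ch2 -m "merge ch2 edits"
```

Had both chapters lived in one XML file, the two edits could collide merely by being adjacent lines; as separate files they merge automatically.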

巨坚强 2024-09-15 17:32:45


Definitely Polarion Track & Wiki...

The entire bug-tracking and wiki database is stored in Subversion so that a complete revision history can be kept.

http://www.polarion.com/products/trackwiki/features.php
