How do I effectively use Git repositories/submodules for a C++ product with many dependencies?

Posted 2024-12-08 20:37:15


I'm very new to Git and still figuring things out... I think I'm finally understanding the whole branching/merging aspect. But I'm still not sure what the best solution for handling project dependencies is. What is best practice? This has got to be a common problem, and yet I can't find a good tutorial or best practice on doing this.

Suppose I have a C++ product that depends on several other C++ libraries, ultimately making up a complicated dependency graph. Libraries like:

  • other internally developed C++ libraries
  • public open source libraries
  • off-the-shelf closed-source libraries

The final C++ product's source code relies on the output of its dependencies in order to compile. These outputs are composed of:

  • A series of C++ header files (notice that the C++ implementation files are absent)
  • A set of compiled binaries (LIB files, DLL files, EXE files, etc)

My understanding is that I should put each library in its own repository. Then it sounds like Git's submodules are mostly what we are looking for. The write-up at http://chrisjean.com/2009/04/20/git-submodules-adding-using-removing-and-updating/ in particular seems like a good introduction, and I can almost understand it. For example, I could have my master project repository refer to a specific external Git repository as a submodule / dependency. C++ code can `#include` header files from the appropriate submodule directories. A build script included with the master product / repository could conceivably proceed to recursively compile all submodules.
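The workflow described above can be sketched with plain git commands. The repository names (`mylib`, `product`) and the `libs/` layout are made-up placeholders; the script builds two throwaway local repos just to show the add-submodule / recursive-clone round trip:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A "library" repository that ships a header (stand-in for a real dependency).
git init -q mylib
git -C mylib config user.email dev@example.com
git -C mylib config user.name dev
echo '#pragma once' > mylib/mylib.h
git -C mylib add mylib.h
git -C mylib commit -qm "add header"

# The master product repository references the library as a submodule.
git init -q product
cd product
git config user.email dev@example.com
git config user.name dev
# protocol.file.allow=always is only needed because this demo uses local paths.
git -c protocol.file.allow=always submodule add "$tmp/mylib" libs/mylib
git commit -qm "add mylib submodule"
cd ..

# A fresh checkout pulls the pinned submodule commit along with the product.
git -c protocol.file.allow=always clone -q --recurse-submodules \
    "$tmp/product" product-clone
test -f product-clone/libs/mylib/mylib.h && echo "submodule header checked out"
```

The product repository records a specific commit of `libs/mylib`, so every clone of the product gets exactly the dependency revision it was built against.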

OK now the question:

How do you typically cache binaries for each repository? Some of our dependencies take hours to compile and aren't updated on a very frequent basis. With the above scheme, I might clone / check out a high-level project from the server to fix a small bug. Now, as I understand it, I'm also forced to clone all the thousands of files that make up each of these open source dependencies - I'm worried that could take some time (especially on Windows). Even worse, wouldn't I then be forced to recompile each and every submodule, even if nobody has changed that submodule for months? (It seems like some kind of local "hash table" scheme on each developer computer that links a changeset ID to a set of compiled binaries would be handy...)
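The "hash table" idea in the parenthetical can be sketched directly: key the cached artifacts by the submodule's commit id. Everything here is an assumption for illustration (the cache layout, the function name `build_cached`, and the commented-out build/copy step):

```shell
# Hypothetical cache: one directory of built binaries per submodule commit.
build_cached() {
  local submodule_dir=$1 cache_root=$2 rev cache_dir
  rev=$(git -C "$submodule_dir" rev-parse HEAD)
  cache_dir="$cache_root/$rev"
  if [ -d "$cache_dir" ]; then
    echo "cache hit for $rev - reusing binaries"
  else
    echo "cache miss for $rev - building"
    mkdir -p "$cache_dir"
    # A real version would run the submodule's build and copy the resulting
    # LIB/DLL files into "$cache_dir", e.g.:
    #   (cd "$submodule_dir" && make) && cp "$submodule_dir"/out/*.lib "$cache_dir"
  fi
}

# Demo on a throwaway repo: the first call misses, the second hits.
set -e
demo=$(mktemp -d)
git init -q "$demo/lib"
git -C "$demo/lib" config user.email dev@example.com
git -C "$demo/lib" config user.name dev
echo '#pragma once' > "$demo/lib/lib.h"
git -C "$demo/lib" add lib.h
git -C "$demo/lib" commit -qm "header"
build_cached "$demo/lib" "$demo/cache"
build_cached "$demo/lib" "$demo/cache"
```

Because a submodule is pinned to a commit, the commit hash is a stable cache key: two developers checked out at the same submodule revision can share (or locally reuse) identical binaries without rebuilding.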

(A previous shop I worked at a few years ago used Mercurial, but all code - internal projects, etc. - was rolled into one single big giant repository, and you had to build everything with a big fat monolithic build script when cloning a newly-created branch from the server. When we were done with the fix / new feature and had merged back with upstream, we deleted the local repository for that particular branch.)

We're doing development on Windows, but will eventually branch out to other non-Microsoft platforms - so portability is important.

Comments (1)

洛阳烟雨空心柳 2024-12-15 20:37:15


Normally this is a bad idea, but why don't you check the binaries into the submodules as well as the compiled code for submodules that don't change often? That way, the fetch will pull down the bins, and when you compile a new version of a dependency with changed binaries, you will see the binaries show up in the git status output.
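A minimal sketch of what the answer describes, with a placeholder file standing in for a real compiled LIB (the repo path and `mylib.lib` name are made up): a prebuilt binary is committed alongside the source, so clones get it for free, and a rebuild makes it show up as modified in `git status`.

```shell
set -e
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" config user.email dev@example.com
git -C "$repo" config user.name dev

printf 'binary v1' > "$repo/mylib.lib"   # stand-in for a real compiled LIB
git -C "$repo" add mylib.lib
git -C "$repo" commit -qm "check in prebuilt binary"

printf 'binary v2' > "$repo/mylib.lib"   # a rebuild changes the binary...
git -C "$repo" status --short            # ...and git status flags it as modified
```

The usual caveat applies: binaries bloat the repository's history, since every rebuilt version is stored forever; that is why this trick is normally reserved for dependencies that change rarely.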
