Managing library dependencies with git
I have a project which is built for multiple OSes (Linux and Windows for now, maybe OS X) and processors. For this project I have a handful of library dependencies, which are mainly external, but I also have a couple of internal ones, in source form, which I compile (cross-compile) for each OS-processor combination possible in my context.
Most of the external libraries do not change very often, only for the occasional local bugfix, or when some feature/bugfix implemented in a newer version looks like it would benefit the project. The internal libraries change quite often (1-month cycles) and are provided by another team in my company in binary form, although I also have access to the source code; if I need a bug fixed, I can do that and generate new binaries for my own use until the next release cycle. The setup I have right now is the following (filesystem only):
-- dependencies
   |-- library_A_v1.0
   |     |-- include
   |     |-- lib
   |-- library_A_v1.2
   |     |-- include
   |     |-- lib
   |-- library_B
   |     |-- include
   |     |-- lib
   |-- ...
The libraries are kept on a server, and every time I make an update I have to copy any new binaries and header files to the server. Synchronization on the client side is done using a file-synchronization utility. Of course, any update to the libraries needs to be announced to the other developers, and everyone has to remember to synchronize their "dependencies" folder.
Needless to say, I don't like this scheme very much. So I was thinking of putting my libraries under version control (git): build them, pack them into a tgz/zip, and push them to the repo. Each library would have its own git repository, so that I could easily tag/branch already-used versions and test-drive new ones: a "stream" of data for each library that I could easily fetch, combine, and update. I would like to have the following:
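That build-pack-push workflow could look roughly like the sketch below. The repository path, platform name, and version tag are placeholders, and a local bare repository stands in for the company server so the example is self-contained:

```shell
set -e
# Scratch area; in practice the remote would live on your server.
workdir=$(mktemp -d)
cd "$workdir"

# A bare repository stands in for the central server.
git init -q --bare server/library_A.git

# Working clone in which each packaged build gets committed and tagged.
git clone -q server/library_A.git library_A
cd library_A
mkdir -p include lib
echo 'void lib_a(void);' > include/library_A.h
echo 'placeholder for the real binary' > lib/liblibrary_A.a

# Pack one OS-processor build and push it under a version tag.
tar czf library_A-linux-x86_64.tgz include lib
git add library_A-linux-x86_64.tgz
git -c user.name=dev -c user.email=dev@example.com \
    commit -qm "library_A v2.0.0, linux-x86_64 build"
git tag v2.0.0
git push -q origin HEAD v2.0.0
```

Tagging each pushed archive is what later lets a build script pin an exact library version per project release.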
- get rid of the current plain-filesystem way of keeping the libraries; right now completely separate folders are kept and managed for each OS and each version, and sometimes they get out of sync, resulting in a mess
- more control, so as to have a clear history of which versions of the libraries we used for which version of our project, much like what we already get from git (VCS) for our source code
- the ability to tag/branch the version of each dependency I'm using; e.g. I have my v2.0.0 tag/branch for library_A, which I normally use in my project, but I would like to test-drive version 2.1.0, so I just build it, push it to the server on a different branch, and call my build script with this particular dependency pointing at the new branch
- simpler build scripts: just pull the sources from the server, pull the dependencies, and build; that would also allow different versions of the same library to be used for different processor-OS combinations (we need that more often than not)
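A build script along those lines could pin each dependency to a tag or branch. `fetch_dependency` below is a hypothetical helper, not an existing tool, and the scratch repository merely stands in for a library repo on the server:

```shell
set -e
# Hypothetical helper: clone one library repo at a given tag/branch
# and unpack any packaged build artifacts it carries.
fetch_dependency() {
    repo=$1; ref=$2; dest=$3
    git clone -q --branch "$ref" "$repo" "$dest"
    for archive in "$dest"/*.tgz; do
        if [ -e "$archive" ]; then
            tar xzf "$archive" -C "$dest"
        fi
    done
}

# Scratch repo standing in for library_A on the server: only the
# packaged tgz is committed, tagged with the library version.
workdir=$(mktemp -d); cd "$workdir"
git init -q library_A && cd library_A
mkdir -p include lib
touch include/library_A.h lib/liblibrary_A.a
tar czf library_A-linux-x86_64.tgz include lib
git add library_A-linux-x86_64.tgz
git -c user.name=dev -c user.email=dev@example.com \
    commit -qm "v2.0.0 build"
git tag v2.0.0
cd ..

# The build script pins library_A to v2.0.0 for this OS-processor combo;
# pointing at another tag/branch swaps in a different version.
fetch_dependency "$workdir/library_A" v2.0.0 dependencies/library_A
```

Because the ref is a parameter, different processor-OS builds of the project can pull different versions of the same library simply by passing different tags.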
I tried to find some alternatives to the direct git-based solution, but without much success; git-annex, for example, seems overly complicated for what I'm trying to do.
What I'm facing right now is that there seems to be a very strong opinion against putting binary files under git or any VCS (although technically I would also have header files; I could also push the folder structure I described directly to git, without the tgz/zip, but I would still have the library binaries), and some of my colleagues, driven by that widely shared opinion, are against this scheme. I perfectly understand that git tracks content, not files, but to some extent I would be tracking content too, and I believe it would definitely be an improvement over the scheme we have right now.
What would be a better solution to this situation? Do you know of any alternatives to the git (VCS) based scheme? Would it be such a monstrous thing to keep my scheme under git :)? Please share your opinions, and especially your experience in handling these kinds of situations.
Thanks
An alternative, which would still fit your project, would be to use git-annex, which would allow you to track the header files while keeping the binaries stored elsewhere.
Then each git repo can be added as a submodule to your main project.
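A minimal sketch of that submodule layout, using a local scratch repo in place of the real library repository (the `protocol.file.allow` override is only needed because recent git versions block local-path submodules by default):

```shell
set -e
workdir=$(mktemp -d); cd "$workdir"

# Stand-in for one library repository on the server.
git init -q library_A && cd library_A
mkdir include && touch include/library_A.h
git add . && git -c user.name=dev -c user.email=dev@example.com \
    commit -qm "headers"
cd ..

# The main project records the library as a submodule under dependencies/;
# the recorded commit pins the exact library version the project uses.
git init -q project && cd project
git -c protocol.file.allow=always \
    submodule add "$workdir/library_A" dependencies/library_A
git -c user.name=dev -c user.email=dev@example.com \
    commit -qm "track library_A as a submodule"
```

Each submodule entry stores a specific commit of the library repo, which gives the per-project-version dependency history the question asks for.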