Deleting SVN history without breaking working copies

Posted 2025-01-16 19:53:18


Over the years our SVN repo has grown to around 100 GB in the local copies.
As roughly half of this is in the .svn folder, I surmise a chunk of the space is taken up by the history.
Whilst the history is useful, it's not required most of the time.

I have space on the server to make a copy of the repo 'frozen' at now, thus keeping the history up to now, and then make a fresh copy of the repo from HEAD to start again.
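(For reference, a consistent 'frozen' copy can be taken with svnadmin hotcopy - the paths here are only illustrative, not our real layout:)

svnadmin hotcopy /path/to/myRepo /path/to/myRepo-frozen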

By placing the frozen repo at a new URL I hoped to put the minified repo back at the same URL and not have to go around all the users to get a fresh checkout. I was hoping that an update would just pull the minification into the existing working copies (and thus also avoid the potential for losing uncommitted work).
After creating a copy of the repo to be frozen I have tried a dump of only HEAD:

svnadmin dump myRepo -r HEAD > my.dump

then deleted the existing repo and loaded the HEAD-only dump back in:

svnadmin load myRepo --file my.dump

The repo shrinks to around 60 GB of disk space used - I also took the opportunity to remove some stuff which we won't need going forward, but which is left in the frozen repo 'just in case'.
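(For anyone reproducing this, that kind of pruning can be done by filtering paths out of the dump before loading it - the path below is just an example, not what we actually removed:)

svndumpfilter exclude some/unwanted/path < my.dump > my-trimmed.dump
svnadmin load myRepo --file my-trimmed.dump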

The fresh repo is of course at revision 1, whereas the working copies from the now-frozen one are at (around) revision 9000.
As a result, my plan of 'just' doing an update fails with an error.
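(The mismatch is easy to see - for example, assuming SVN 1.9+ and placeholder paths:)

svnlook youngest /path/to/myRepo                      # fresh repo: 1
svn info --show-item revision /path/to/workingcopy    # working copy: ~9000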

Looking around, it seems that the normal way around this is to create 9000-odd empty commits to make the numbers match, and then it should work OK. That seems a bit of a bodge.
Would that work?
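(From what I've read, those padding revisions would be generated as a dump file and loaded in one go rather than committed one at a time - a rough sketch, assuming the old HEAD really was 9000:)

# emit empty revisions 2..9000 in dump format, then load them (numbers are illustrative)
{
  printf 'SVN-fs-dump-format-version: 2\n\n'
  for r in $(seq 2 9000); do
    printf 'Revision-number: %d\n' "$r"
    printf 'Prop-content-length: 10\n'
    printf 'Content-length: 10\n\n'
    printf 'PROPS-END\n\n'
  done
} > padding.dump
svnadmin load myRepo --file padding.dump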

Given that I was trying to lose the history to save space, that approach will create a load of history I don't need. Are empty commits tiny?

Is there a workaround, or is the best recourse just to accept the need to do a fresh checkout on all the machines?
