Mercurial - High CPU usage when pushing changes to the server repository
We recently implemented Mercurial within one of our teams at our company as a trial before rolling it out to everyone. So far, everything has gone pretty well. But when we gave another team access to Mercurial, we ran into an issue when pushing changes from our local repository to the server repository.
The python.exe process uses close to 100% CPU on the server while a push is in progress, and that is with a single push going on. It was originally at 100%, but we added server.uncompressed = true to the hgrc, which seemed to help a little; it's still high, though.
The server is a VM running Windows Server 2008 Standard on an Intel Xeon 3 GHz with 2 GB of RAM.
A Google search yielded no useful information. Does SO have any suggestions?
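The hgrc change mentioned above would look something like this (a sketch; in Mercurial's config format the option sits in a `[server]` section, and it lets clients use the uncompressed streaming protocol, trading network bandwidth for lower server-side CPU):

```ini
; .hg/hgrc on the server (sketch)
[server]
; Allow the uncompressed streaming protocol:
; more bandwidth used, less CPU spent compressing
uncompressed = True
```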
Are you really sure there's a problem somewhere?
I really don't see what's wrong with using all the CPU power at its disposal to do the job.
Disabling compression probably "helped" because Python no longer has to wait for the compression library to do its work.
Would you prefer to wait 5 seconds at 20% load, or 1 second at 100% load?
You might be running into issue #135. Try hosting the repository over https instead of accessing it via ssh.
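A minimal way to switch to HTTP(S) serving is an `hgweb.config` file (a sketch; the paths are hypothetical, and for actual https you would run hgweb.cgi behind IIS or Apache with TLS configured at the web server):

```ini
; hgweb.config (sketch) -- serve for testing with:
;   hg serve --webdir-conf hgweb.config -p 8000
; or deploy hgweb.cgi under IIS/Apache and add TLS there for https
[paths]
; hypothetical location; maps every repo under C:\repos to a URL
/projects = C:\repos\*
```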