Why is my hgweb server so slow?
I am serving up access to many Mercurial repositories using hgweb, providing them as a collection:
[collections]
/home/me = /home/me/projects
This serves them up at localhost/projects.
I have around 30 repositories at that location, in a source tree with a fair number of other, non-Mercurial-managed projects.
hgweb is really slow to respond; it takes about 30 seconds to provide a listing at http://localhost/, and about 30 seconds to open a project, making it painful to use this for sharing purposes.
How can I tune this to make it faster?
I'm running on OSX, if it makes a difference.
Comments (4)
As an open-source alternative, you can use RhodeCode (http://rhodecode.com); it's an hgweb replacement written entirely in Python.
AFAIK, hgweb will scan all subdirectories of the [collections] entry in its configuration file. Since you've got a lot of non-Mercurial directories in there, it has to do a scan of each subdirectory of each of them. In contrast, it can stop scanning at the top level of a directory tree containing a Mercurial repository, because it will see the .hg directory there.
If you're using a newer Mercurial (after 1.1, it looks like), try changing the hgweb.config to use a [paths] section instead, and provide explicit entries for each of the Mercurial repositories.
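For a concrete idea of what that looks like (the repository names below are made up; only /home/me/projects comes from the question), the [paths] section lists each repository explicitly:
[paths]
projects/app-one = /home/me/projects/app-one
projects/app-two = /home/me/projects/app-two
projects/lib-common = /home/me/projects/lib-common
The left-hand side is the virtual path the repository is served under, and the right-hand side is its on-disk location, so hgweb never has to walk the rest of the source tree.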
Following up on Niall's very helpful answer above, I realized that I needed a tool to maintain this [paths] section. I ended up going with this (which uses configobj by M. Foord).
This script is run by OS X's equivalent of cron every 15 minutes and ensures that my hgweb never gets out of date.
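A rough sketch of such a script (the hgweb.config location and the projects/ prefix are assumptions, not necessarily what was actually used) might look like this, using configobj:
# Sketch only: rebuild the [paths] section of hgweb.config from the
# Mercurial repositories found directly under the projects directory.
import os
from configobj import ConfigObj

PROJECTS_DIR = "/home/me/projects"        # location from the question
HGWEB_CONFIG = "/home/me/hgweb.config"    # assumed path to the hgweb config

def find_repos(root):
    # A directory is a Mercurial repository if it contains a .hg subdirectory.
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(os.path.join(path, ".hg")):
            yield name, path

config = ConfigObj(HGWEB_CONFIG)
config["paths"] = dict(("projects/" + name, path)
                       for name, path in find_repos(PROJECTS_DIR))
config.write()
Scheduled every 15 minutes (via launchd on OS X, or a plain crontab entry elsewhere), something like this keeps the explicit entries in sync as repositories are added or removed.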
The problem is probably the server searching recursively for repos during every request. Sounds like you've got a pretty big directory there, so this makes sense.
Try changing the entry to use the * notation so it will only search one level down. This notation will work with the preferred [paths] attribute, but I am not sure whether it will help the [collections] attribute.
Check here for more on the issue:
https://www.mercurial-scm.org/wiki/HgWebDirStepByStep
If that doesn't work, it definitely will if you change to [paths] and use the * notation.
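If I'm reading that wiki page right, the glob form looks something like this (the virtual path on the left is arbitrary, not from the question):
[paths]
projects = /home/me/projects/*
A trailing * publishes the repositories found one level below /home/me/projects under the projects/ prefix, while ** would recurse into subdirectories and bring back the slow deep scan.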