Is there a way to convert Trac Wiki pages to HTML?
I see the suggestion of using Mylyn WikiText to convert wiki pages to HTML in this question, but I'm not sure whether it's what I'm looking for from reading the front page of the site alone; I'll look into it further. I would prefer a Trac plug-in so I could initiate the conversion from within the wiki options, but all the plug-ins at Trac-Hacks export single pages only, whereas I want to dump all formatted pages in one go.
So, is there an existing Trac plug-in or stand-alone application that will meet my requirements? If not, where would you point me to start looking at implementing that functionality myself?
2 Answers
You may find some useful information in the comments on this ticket on trac-hacks. One user reports using the wget utility to create a mirror copy of the wiki as if it were a normal website. Another user reports using the XmlRpc plugin to extract HTML versions of any given wiki page, but this method would probably require you to write a script to interface with the plugin. The poster didn't provide any example code, unfortunately, but the XmlRpc Plugin page includes a decent amount of documentation and samples to get you started.
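For the wget approach, a mirroring invocation might look something like this; the URL is a placeholder for your Trac instance, and the exact flags you want may differ:

```sh
# Mirror the wiki as static pages; --convert-links rewrites links for
# local browsing, --page-requisites pulls in CSS and images.
wget --mirror --convert-links --page-requisites https://trac.example.com/wiki
```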
If you have access to a command line on the server hosting Trac, you can use the trac-admin command, as shown below, to retrieve a plain-text version of a specified wiki page. You would then have to parse the wiki syntax into HTML, but there are tools available to do that.
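The exact command wasn't preserved in this copy; trac-admin's wiki export subcommand is invoked along these lines, where the environment path and page name are placeholders:

```sh
# Dumps the raw wiki text of the page "WikiStart" to stdout; append a
# filename argument to write it to a file instead.
trac-admin /path/to/projenv wiki export WikiStart
```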
For our purposes, we wanted to export each of the wiki pages individually, without the header/footer and other instance-specific content. The XML-RPC interface was a good fit for this. I created a Python 3.6+ script for exporting the whole of the wiki into HTML files in the current directory (sketched below). Note that this technique doesn't rewrite any hyperlinks, so they will resolve absolutely against the original site.
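The original script wasn't captured in this copy. Below is a minimal sketch of the approach it describes, assuming the XmlRpcPlugin is enabled on the Trac instance: wiki.getAllPages() and wiki.getPageHTML() are methods the plugin exposes, while the endpoint paths (/rpc for anonymous, /login/rpc for authenticated access), the credential handling, and the output layout here are illustrative.

```python
# Minimal sketch, not the author's original script. Assumes the Trac
# XmlRpcPlugin is enabled; wiki.getAllPages() and wiki.getPageHTML()
# are methods it exposes.
import getpass
import os
import pathlib
import urllib.parse
import xmlrpc.client


def main():
    base = os.environ['TRAC_URL'].rstrip('/')  # e.g. https://trac.example.com
    user = os.environ.get('USER', 'anonymous')
    password = getpass.getpass(f'Password for {user} (Enter for none): ')

    if password:
        # xmlrpc.client sends user:pass from the URL as HTTP Basic auth;
        # authenticated XML-RPC traditionally lives under /login/rpc.
        parts = urllib.parse.urlsplit(base)
        creds = (urllib.parse.quote(user, safe='') + ':'
                 + urllib.parse.quote(password, safe=''))
        endpoint = f'{parts.scheme}://{creds}@{parts.netloc}{parts.path}/login/rpc'
    else:
        endpoint = f'{base}/rpc'

    proxy = xmlrpc.client.ServerProxy(endpoint)
    for name in proxy.wiki.getAllPages():
        html = proxy.wiki.getPageHTML(name)
        # Page names may contain '/', so mirror the hierarchy as directories.
        out = pathlib.Path(f'{name}.html')
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(html, encoding='utf-8')
        print('exported', name)


if __name__ == '__main__':
    main()
```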
The script requires only Python 3.6, so download it and save it to an export-wiki.py file, then set the TRAC_URL environment variable and invoke the script. For example, on Unix:
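A typical invocation would look something like the following; the URL is a placeholder for your own Trac instance:

```sh
# Point the script at your Trac instance and run it with Python 3.6+.
TRAC_URL=https://trac.example.com python3 export-wiki.py
```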
It will prompt for a password; if no password is required, just press Enter to bypass it. If a different username is needed, also set the USER environment variable. Keyring support is also available but can be disregarded.