How can I verify all the links on a page as a black-box tester?
I'm trying to verify that all my page links are valid, and also something similar: whether every page has a specified link, such as Contact. I use Python unit testing and the Selenium IDE to record the actions that need to be tested.
So my question is: can I verify the links in a loop, or do I need to try every link on my own?
I tried to do this with __iter__, but it didn't get anywhere close. The reason may be that I'm poor at OOP, but I still think there must be another way of testing links than clicking them and recording them one by one.
4 Answers
I would just use standard shell commands for this:

links

then scan the resulting files with

grep --files-without-match

to find those that don't have a contact link.
If you're on Windows, you can install cygwin or install the win32 ports of these tools.
EDIT: Embedded info from the "use wget to detect broken links" link above:
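A minimal sketch of that pipeline. The wget line is commented out so the demo runs offline (its URL is a placeholder); the grep step is exercised on two throwaway files created here, standing in for a mirrored site:

```shell
# Mirror the site first (placeholder URL; see 'use wget to detect broken links'):
# wget -r -o wget.log http://example.com/

# Stand-in for the mirrored pages:
mkdir -p demo_site
printf '<a href="/contact">Contact</a>' > demo_site/with.html
printf '<p>no links here</p>'           > demo_site/without.html

# List the files that do NOT mention a contact link:
grep --files-without-match 'contact' demo_site/*.html
```

Here grep prints demo_site/without.html, the one page missing the link. `--files-without-match` is the long form of GNU grep's -L option.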
Though the tool is in Perl, have you checked out linklint? It's a tool which should fit your needs exactly. It will parse links in an HTML doc and will tell you when they are broken.
If you're trying to automate this from a Python script, you'd need to run it as a subprocess and get the results, but I think it would get you what you're looking for.
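The subprocess part could look like the sketch below. The real linklint command line is left to its own documentation (the flags in the comment are assumptions); the demo call swaps in echo as a harmless stand-in so the pattern itself can run anywhere:

```python
import subprocess

def run_checker(cmd):
    """Run an external link checker and hand back (exit code, text output)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout

# Real usage would look something like (flags are assumptions, check linklint's docs):
#   run_checker(["linklint", "-doc", "report", "/@"])

# Demo with a stand-in command so the sketch runs without linklint installed:
code, out = run_checker(["echo", "linklint stand-in"])
```

From there you would parse the checker's report for broken-link lines and fail the test if any appear.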
What exactly is "testing links"?
If it means they lead to non-4xx URIs, I'm afraid you must visit them.
As for the existence of given links (like "Contact"), you may look for them using XPath.
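The "you must visit them" part can be sketched with the standard library alone: request each URL and look at the status code. The throwaway local server below stands in for a real site so the example runs offline; with a real page you would feed link_status() the hrefs scraped by Selenium or BeautifulSoup:

```python
import http.server
import threading
import urllib.error
import urllib.request

def link_status(url):
    """Return the HTTP status code for url (4xx/5xx included)."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

# --- throwaway local server so the demo needs no network ---
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/ok":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"fine")
        else:
            self.send_error(404)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

good = link_status(base + "/ok")       # 200
bad = link_status(base + "/missing")   # 404
server.shutdown()
```

In a unit test you would assert that no collected link returns a 4xx or 5xx code.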
You could (as yet another alternative), use BeautifulSoup to parse the links on your page and try to retrieve them via urllib2.
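A sketch of the extraction half of that idea. BeautifulSoup's soup.find_all("a") gives you the anchors in one call; the stdlib html.parser version below does the same extraction so the sketch has no third-party dependency (and urllib2 is urllib.request on Python 3):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag fed to the parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = """
<html><body>
  <a href="/contact">Contact</a>
  <a href="/about">About</a>
  <a name="anchor-only">no href</a>
</body></html>
"""
collector = LinkCollector()
collector.feed(page)
# collector.links now holds ["/contact", "/about"]; each could then be
# fetched with urllib.request.urlopen to see whether it resolves.
```

Combining this with the status-code check above gives you the loop the question asks for: collect every href, visit each one, and assert none come back broken.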