How to ignore URL references in Scrapy

Posted on 2025-01-21 08:25:04

I'm using Scrapy to scrape a website that contains a menu with a lot of sublevel menus.
The problem is that I'm extracting multiple URLs that correspond to the same item/subitem on the website. I'm extracting them as if they were different items because the URLs contain a "ref=" segment.
For example:

https://thestore/category1/subitem/subsubitem_ABC/ref=asd_asd_1
https://thestore/category1/subitem/subsubitem_ABC/ref=asd_asd_2
https://thestore/category1/subitem/subsubitem_ABC/ref=asd_asd_3
https://thestore/category1/subitem/subsubitem_ABC/ref=asd_asd_4

All these URLs correspond to the same subsubitem_ABC on the website.
Instead, I would like to extract only the one URL corresponding to subsubitem_ABC:

https://thestore/category1/subitem/subsubitem_ABC

This way, my intention is to reduce the crawl time and avoid duplicated URLs for the same subsubitem, subitem, or item.

So far I have these rules:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

rules = [
    Rule(
        LinkExtractor(
            restrict_xpaths=['my_xpath//a'],  # 'my_xpath' is a placeholder
        ),
        follow=True,
        callback='parse_categories',
    )
]

Is there something I can add to the Rule/LinkExtractor to avoid the references in the URLs?
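One possibility (a minimal sketch, not from the original thread) is the process_value argument of LinkExtractor, which receives every extracted URL and can return a rewritten one. Stripping the "/ref=..." suffix collapses all variants of an item to one canonical URL:

import re

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

def strip_ref(url):
    # Drop the trailing "/ref=..." segment so every "ref=" variant of
    # the same item collapses to one canonical URL.
    return re.sub(r'/ref=[^/]*$', '', url)

rules = [
    Rule(
        LinkExtractor(
            restrict_xpaths=['my_xpath//a'],  # same placeholder XPath as above
            process_value=strip_ref,
        ),
        follow=True,
        callback='parse_categories',
    )
]

With the canonical URLs, Scrapy's default duplicate filter (RFPDupeFilter) treats all four "ref=" variants as the same request, so each item is crawled only once.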

Comments (1)

风筝有风,海豚有海 2025-01-28 08:25:04

If you want to scrape only "https://thestore/category1/subitem/subsubitem_ABC/ref=asd_asd_1", you can use a regular expression rather than an XPath. It could be allow = r'https://thestore/category1/subitem/subsubitem_ABC/ref(.*?)1'. Hope this helps.
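For context, here is a sketch of how that suggestion could plug into the rule from the question (the allow pattern is carried over as-is, so it only follows the first "ref=" variant of this one item):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

rules = [
    Rule(
        LinkExtractor(
            # Follow only URLs matching the pattern, so the other "ref="
            # variants of subsubitem_ABC are never requested.
            allow=r'https://thestore/category1/subitem/subsubitem_ABC/ref(.*?)1',
        ),
        follow=True,
        callback='parse_categories',
    )
]

Note that this pattern is tied to a single item; to deduplicate across all items, rewriting the URLs (as in the process_value sketch above) generalizes better.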
