Using crawler4j: how do I save website data?
I have started using crawler4j and it seems to be looking up websites with no issues. Yet, I need to save the crawled data. Does crawler4j support this functionality?
I have tried using the advanced Java example source code (the Downloader.java class), but it doesn't seem to work.
Specifically, the code below never prints anything.
Downloader myDownloader = new Downloader();
Page page = myDownloader.download("http://ics.uci.edu");
if (page != null) {
System.out.println(page.getText());
}
I would appreciate some input on this. Thank you.
2 Answers
If you're rolling your own crawler that extends the WebCrawler class, you have access to the details of the crawled page in its visit(Page) method. For example, the following will get you the contents of the page:
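A minimal sketch along those lines (the class name MyCrawler is illustrative; getParseData() and HtmlParseData are crawler4j's standard parse API):

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;

public class MyCrawler extends WebCrawler {

    @Override
    public void visit(Page page) {
        // Parse data is an HtmlParseData only for HTML responses
        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData parseData = (HtmlParseData) page.getParseData();
            String text = parseData.getText(); // visible text of the page
            String html = parseData.getHtml(); // full HTML source
            System.out.println("Visited: " + page.getWebURL().getURL());
            System.out.println("Text length: " + text.length());
        }
    }
}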
From there you can save it to disk or apply any processing that's required.
Did you try it with other pages? In fact, the URL you're using is missing the "www". The correct one is http://www.ics.uci.edu/.
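Also, crawler4j is normally driven through a CrawlController that seeds the start URLs, rather than by downloading pages directly. A minimal sketch of the standard bootstrap, assuming the MyCrawler class from the first answer (the storage folder path and crawler thread count are arbitrary example values):

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class CrawlerMain {
    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        // Folder where crawler4j keeps its intermediate crawl data (example path)
        config.setCrawlStorageFolder("/tmp/crawl-root");

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        // Seed with the "www" host as suggested above
        controller.addSeed("http://www.ics.uci.edu/");

        // Blocks until the crawl finishes; runs 1 crawler thread
        controller.start(MyCrawler.class, 1);
    }
}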