Using wget to download all hulkshare/mediafire-linked files on a page
So I've been trying to set up wget to download all the mp3s from www.goodmusicallday.com. Unfortunately, rather than the mp3s being hosted by the site, the site puts them up on www.hulkshare.com and then links to the download pages. Is there a way to use the recursive and filtering abilities of wget to make it go to each hulkshare page and download the linked mp3?
Any help is much appreciated
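For context, the recursive and filtering invocation being asked about would look roughly like the sketch below. This is only an illustration of the flags in question; on its own it is unlikely to work if hulkshare serves the actual files from another host or behind scripts.

    # Sketch of wget's recursive + filtering flags: span hosts, stay within the
    # two listed domains, follow links two levels deep, and keep only .mp3 files.
    wget -r -l 2 -H -D goodmusicallday.com,hulkshare.com -A mp3 http://www.goodmusicallday.com/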
Comments (2)
So, a friend of mine actually figured out an awesome way to do this, just enter the code below in Terminal:
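(The exact snippet is not reproduced here; a minimal sketch of that kind of Terminal loop, assuming the blog's front page links straight to hulkshare pages that expose a plain .mp3 URL in their HTML, might look like the following.)

    # Sketch only, not the exact command from the answer: fetch the blog's front
    # page, collect every hulkshare link, then grab the first .mp3 URL found on
    # each of those pages. Assumes GNU grep and plain links in the HTML.
    for page in $(wget -qO- http://www.goodmusicallday.com/ | grep -oE 'https?://www\.hulkshare\.com/[^"]+'); do
        mp3=$(wget -qO- "$page" | grep -oE 'https?://[^"]+\.mp3' | head -n 1)
        [ -n "$mp3" ] && wget "$mp3"
    done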
I guess not!
I have tried on several occasions to script downloads from mediafire, but in vain.
That is exactly why they don't offer a simple download link and attach a timer to it instead.
If you look carefully, you will see that the actual file-hosting server behind the download link is not www.mediafire.com, but rather something like download666.com.
So I don't think it is possible with wget.
wget can only save the day when the download links are plain HTML links, i.e. <a> tags.
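To see the point about the real hosting server, one rough way is to peek at the redirect chain and response headers behind a link. The URL below is a placeholder, and this sketch will not get past mediafire's JavaScript timer; it only shows which host actually answers.

    # Illustrative check of the host actually serving a file behind a link
    # (placeholder URL; does not defeat the timer/JavaScript on mediafire).
    wget --spider --max-redirect=10 --server-response "http://www.mediafire.com/?someFileId" 2>&1 | grep -i location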
Regards,