Programmatically log in to a forum, then take a screenshot

Posted on 2024-07-09 14:11:32


I'd like to log in to the Forums part of Community Server (e.g. http://forums.timesnapper.com/login.aspx?ReturnUrl=/forums/default.aspx), then download a specific page and run a regex over it (to see if there are any posts waiting for moderation). If there are, I'd like to send an email.

I'd like to do this from a Linux server.

Currently I know how to download a page (using e.g. wget), but I have a problem with the login step. Any bright ideas how that works?


Comments (4)

蓝色星空 2024-07-16 14:11:32


Looking at the source of the login page, it appears to be an ASP.NET app, so you'd probably need to do a couple of things to achieve this:

Manage the hidden __VIEWSTATE form field and post it back when you submit the login details.

Once you get past that, you can probably reference the specific page in question using just an absolute URL, but you'd need to handle the ASP.NET forms-authentication cookie and send it as part of the GET request.
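The two steps above can be sketched with nothing but the Python standard library. This is a hypothetical sketch, not tested against Community Server: the credential field names ("username", "password") are assumptions, and you'd need to inspect the real login form's HTML to find the actual input names.

```python
import re
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar


def extract_hidden_fields(html: str) -> dict:
    """Collect hidden <input> name/value pairs (e.g. __VIEWSTATE)."""
    pattern = r'<input[^>]*type="hidden"[^>]*name="([^"]+)"[^>]*value="([^"]*)"'
    return dict(re.findall(pattern, html))


def login_and_fetch(login_url: str, username: str, password: str,
                    target_url: str) -> str:
    """Log in to an ASP.NET forms page, then fetch target_url authenticated."""
    jar = CookieJar()
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))

    # Step 1: GET the login page so we can echo __VIEWSTATE back.
    fields = extract_hidden_fields(
        opener.open(login_url).read().decode("utf-8", "replace"))

    # Step 2: POST the hidden fields plus the credentials; the cookie jar
    # captures the forms-authentication cookie the server sets.
    fields.update({"username": username, "password": password})  # assumed names
    opener.open(login_url, urllib.parse.urlencode(fields).encode()).read()

    # Step 3: GET the page we actually care about, now authenticated.
    return opener.open(target_url).read().decode("utf-8", "replace")
```

The cookie jar attached to the opener is what carries the forms-auth cookie from the login POST into the later GET, which is the part plain wget makes you manage by hand.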

梦行七里 2024-07-16 14:11:32


You might have better luck with Selenium or see this question for more suggestions:

Script for College Class Registration

寄人书 2024-07-16 14:11:32


Personally, I'd write it in Perl, using WWW::Mechanize, and do something like:


use strict;
use warnings;
use WWW::Mechanize;

my $login_url = 'login url here';
my $username  = 'username';
my $password  = 'password';

my $mech = WWW::Mechanize->new;
$mech->get($login_url)
    or die "Failed to fetch login page";

# Fill the visible form fields (username, password) in document order.
$mech->set_visible($username, $password)
    or die "Failed to find fields to complete";
$mech->submit
    or die "Failed to submit form";

if ($mech->content() =~ /posts awaiting moderation/i) {
    # Do something here -- e.g. send the notification email.
}

I've no idea whether the above will work, as I don't have login details to a Community Server (whatever that is) to test it against, but it should give you something you could work from easily enough, and shows the power of WWW::Mechanize.

谁许谁一生繁华 2024-07-16 14:11:32


You can do it all with wget. You need to submit the form using POST and store the cookies. The relevant section from the wget man page:

--post-data=string
--post-file=file

Use POST as the method for all HTTP requests and send the specified data in the request body.
"--post-data" sends string as data, whereas "--post-file" sends the contents of file.  Other than
that, they work in exactly the same way.

This example shows how to log in to a server using POST and then proceed to download the desired pages,
presumably only accessible to authorized users:

       # Log in to the server.  This can be done only once.
       wget --save-cookies cookies.txt \
            --post-data 'user=foo&password=bar' \
            http://server.com/auth.php

       # Now grab the page or pages we care about.
       wget --load-cookies cookies.txt \
            -p http://server.com/interesting/article.php
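Once the wget step above has saved the authenticated page to disk, the remaining part of the question (the regex check and the email) could look something like the following sketch. The phrase being matched, the addresses, and the local SMTP host are all placeholder assumptions.

```python
import re
import smtplib
from email.message import EmailMessage


def moderation_pending(html: str) -> bool:
    """True if the downloaded page mentions posts awaiting moderation."""
    # The exact phrase is an assumption; check the real page's wording.
    return re.search(r"posts awaiting moderation", html, re.IGNORECASE) is not None


def notify(recipient: str, smtp_host: str = "localhost") -> None:
    """Send a notification mail via a local SMTP server (placeholder addresses)."""
    msg = EmailMessage()
    msg["Subject"] = "Forum posts awaiting moderation"
    msg["From"] = "forum-check@example.com"
    msg["To"] = recipient
    msg.set_content("There are posts waiting for moderation.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)


# Usage (after the wget step has written article.php locally):
#   if moderation_pending(open("article.php", errors="replace").read()):
#       notify("moderator@example.com")
```

Run from cron on the Linux server, this plus the wget login gives the whole pipeline: log in, download, regex, email.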