Simulating Fiddler's buffered-mode requests in C#

Published 2024-12-06 13:28:33 · 2,107 characters · 1 view · 0 comments


I am building a web-scraping/crawler C# .NET application that keeps sending requests to a server to collect information. The problem is that for certain pages on this specific server, the web response is always a 404 Not Found. Surprisingly, though, I've discovered that as long as Fiddler is running the problem seems to vanish and the request returns a successful response. I've been searching the web for an answer but found none. On the brighter side, after searching the web and analysing Fiddler's timeline feature, I have come to some conclusions:

1. Fiddler loads these web pages using Buffered mode, while my application uses Stream mode.
2. Fiddler also appears to reuse the connection; in other words, Keep-Alive is set to true.

The question now is how I can mimic or simulate the way Fiddler loads the web response in Buffered mode, and whether Fiddler actually does some trick (i.e. modifies the response) to get the correct response. I am using HttpWebRequest and HttpWebResponse to request my pages. I need a way to buffer the HttpWebResponse completely before returning data to the client (which is my server).

    public static String getCookie(String username, String password)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("certain link");

        request.UserAgent = "Mozilla/5.0 (Windows NT 6.0; rv:6.0.2) Gecko/20100101 Firefox/6.0.2";
        request.Credentials = new NetworkCredential(username, password);

        // Dispose the response so the connection can be reused.
        using (HttpWebResponse wr = (HttpWebResponse)request.GetResponse())
        {
            String y = wr.Headers["Set-Cookie"];
            return y.Replace("; path=/", "");
        }
    }

    /// <summary>
    /// Requests the html source of a given web page, using the request credentials given.
    /// </summary>
    /// <param name="username"></param>
    /// <param name="password"></param>
    /// <param name="webPageLink"></param>
    /// <returns></returns>
    public static String requestSource(String username, String password, String webPageLink)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(webPageLink);

        if (username != null && password != null)
        {
            request.Headers["Cookie"] = getCookie(username, password);
            request.UserAgent = "Mozilla/5.0 (Windows NT 6.0; rv:6.0.2) Gecko/20100101 Firefox/6.0.2";
            request.Credentials = new NetworkCredential(username, password);
        }

        // Dispose both the response and the reader once the body has been read.
        using (HttpWebResponse wr = (HttpWebResponse)request.GetResponse())
        using (StreamReader sr = new StreamReader(wr.GetResponseStream()))
        {
            return sr.ReadToEnd();
        }
    }

Comments (2)

情绪少女 2024-12-13 13:28:33

Did you try taking a look at HttpWebRequest's AllowWriteStreamBuffering property? You could also try appending all of Fiddler's headers to your request, to be as close to Fiddler as you can.
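A minimal sketch of that suggestion, assuming a placeholder URL and illustrative header values (these are not Fiddler's actual defaults, just typical browser-like headers):

```csharp
using System.Net;

static class FiddlerLikeRequest
{
    // Configure a request to resemble one relayed through a buffering
    // proxy: buffered writes, connection reuse, browser-like headers.
    public static HttpWebRequest Configure(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);

        request.AllowWriteStreamBuffering = true;   // buffer any request body before sending
        request.KeepAlive = true;                   // reuse the underlying connection

        // Illustrative browser-style headers; capture the real ones in Fiddler.
        request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
        request.Headers["Accept-Language"] = "en-us,en;q=0.5";
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

        return request;
    }
}
```

No network traffic happens until `GetResponse()` is called, so the configuration itself is cheap to build and inspect.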

吖咩 2024-12-13 13:28:33

Could it be that your scraper is being detected and shut down, and Fiddler slows it down enough that it doesn't get detected? http://google-scraper.squabbel.com/
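If that's the cause, pacing the requests yourself should reproduce the effect without the proxy. A rough sketch (the 2-second interval is an arbitrary starting point, not a known threshold for any server):

```csharp
using System;
using System.Threading;

static class Throttle
{
    static DateTime _lastRequest = DateTime.MinValue;

    // Block until at least minInterval has passed since the previous call,
    // pacing requests the way a debugging proxy incidentally does.
    public static void Wait(TimeSpan minInterval)
    {
        var elapsed = DateTime.UtcNow - _lastRequest;
        if (elapsed < minInterval)
            Thread.Sleep(minInterval - elapsed);
        _lastRequest = DateTime.UtcNow;
    }
}
```

Call `Throttle.Wait(TimeSpan.FromSeconds(2))` before each `GetResponse()` and adjust the interval until the 404s stop.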
