Proxies with Python "Requests" module

Posted on 2024-12-18 17:48:29

Just a short, simple one about the excellent Requests module for Python.

I can't seem to find in the documentation what the variable 'proxies' should contain. When I send it a dict with a standard "IP:PORT" value, it rejects it, asking for 2 values.
So, I guess (because this doesn't seem to be covered in the docs) that the first value is the IP and the second the port?

The docs mention this only:

proxies – (optional) Dictionary mapping protocol to the URL of the proxy.

So I tried this... what should I be doing?

proxy = { ip: port}

and should I convert these to some type before putting them in the dict?

r = requests.get(url,headers=headers,proxies=proxy)

Answers (12)

最笨的告白 2024-12-25 17:48:29

The proxies' dict syntax is {"protocol": "scheme://ip:port", ...}. With it you can specify different (or the same) proxies for requests that use the http, https, and ftp protocols:

http_proxy  = "http://10.10.1.10:3128"
https_proxy = "https://10.10.1.11:1080"
ftp_proxy   = "ftp://10.10.1.10:3128"

proxies = { 
              "http"  : http_proxy, 
              "https" : https_proxy, 
              "ftp"   : ftp_proxy
            }

r = requests.get(url, headers=headers, proxies=proxies)

Deduced from the requests documentation:

Parameters:
method – method for the new Request object.
url – URL for the new Request object.
...
proxies – (optional) Dictionary mapping protocol to the URL of the proxy.
...


On Linux you can also do this via the HTTP_PROXY, HTTPS_PROXY, and FTP_PROXY environment variables:

export HTTP_PROXY=10.10.1.10:3128
export HTTPS_PROXY=10.10.1.11:1080
export FTP_PROXY=10.10.1.10:3128

On Windows:

set http_proxy=10.10.1.10:3128
set https_proxy=10.10.1.11:1080
set ftp_proxy=10.10.1.10:3128
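
For completeness, here is a minimal sketch (the proxy address is a placeholder) of how Requests interacts with these environment variables: they are picked up automatically because a Session's trust_env defaults to True, and you can opt out per session by turning it off.

import os
import requests

# Placeholder proxy address; substitute your own.
os.environ['HTTP_PROXY'] = 'http://10.10.1.10:3128'

# With trust_env=True (the default), plain calls use the environment proxy.
r = requests.get('http://example.org')

# To ignore HTTP_PROXY/HTTPS_PROXY/NO_PROXY for a session, disable trust_env.
s = requests.Session()
s.trust_env = False
r = s.get('http://example.org')
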
梦屿孤独相伴 2024-12-25 17:48:29

You can refer to the proxy documentation here.

If you need to use a proxy, you can configure individual requests with the proxies argument to any request method:

import requests

proxies = {
  "http": "http://10.10.1.10:3128",
  "https": "https://10.10.1.10:1080",
}

requests.get("http://example.org", proxies=proxies)

To use HTTP Basic Auth with your proxy, use the http://user:password@host/ syntax:

proxies = {
    "http": "http://user:[email protected]:3128/"
}
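
One pitfall worth adding (not from the quoted docs): if the username or password contains reserved characters such as @ or :, percent-encode them before embedding them in the proxy URL. A minimal sketch with made-up credentials:

from urllib.parse import quote

# Hypothetical credentials containing reserved characters.
user = quote('domain\\user', safe='')
password = quote('p@ss:word', safe='')

proxies = {
    'http': f'http://{user}:{password}@10.10.1.10:3128/'
}
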
缱倦旧时光 2024-12-25 17:48:29

I have found that urllib has some really good code to pick up the system's proxy settings and they happen to be in the correct form to use directly. You can use this like:

import urllib.request

...
r = requests.get('http://example.org', proxies=urllib.request.getproxies())

It works really well and urllib knows about getting Mac OS X and Windows settings as well.
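
For reference, a quick sketch of what that helper hands back (the exact contents depend on your system settings, so the commented output is only an assumption): getproxies() returns a dict mapping scheme names to proxy URLs, which is exactly the shape the proxies argument expects, so it can also be attached to a Session.

import urllib.request
import requests

system_proxies = urllib.request.getproxies()
print(system_proxies)  # e.g. {'http': 'http://10.10.1.10:3128', 'https': 'http://10.10.1.10:3128'}

# Reuse the system settings for every request made through the session.
s = requests.Session()
s.proxies.update(system_proxies)
r = s.get('http://example.org')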

随波逐流 2024-12-25 17:48:29

The accepted answer was a good start for me, but I kept getting the following error:

AssertionError: Not supported proxy scheme None

The fix was to specify the http:// in the proxy URL, thus:

http_proxy  = "http://194.62.145.248:8080"
https_proxy  = "https://194.62.145.248:8080"
ftp_proxy   = "10.10.1.10:3128"

proxyDict = {
              "http"  : http_proxy,
              "https" : https_proxy,
              "ftp"   : ftp_proxy
            }

I'd be interested as to why the original works for some people but not me.

Edit: I see the main answer is now updated to reflect this :)
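
If you are building proxy URLs from bare ip:port strings (as in the question), a small guard like the following avoids the missing-scheme error. This helper is just an illustration, not part of Requests:

def as_proxy_url(address, scheme='http'):
    """Prepend a scheme to a bare 'ip:port' string if it lacks one."""
    if '://' not in address:
        return f'{scheme}://{address}'
    return address

proxies = {
    'http': as_proxy_url('10.10.1.10:3128'),
    'https': as_proxy_url('10.10.1.11:1080'),
}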

衣神在巴黎 2024-12-25 17:48:29

If you'd like to persist cookies and session data, you'd best do it like this:

import requests

proxies = {
    'http': 'http://user:pass@10.10.1.10:3128',
    'https': 'https://user:pass@10.10.1.10:3128',
}

# Create the session and set the proxies.
s = requests.Session()
s.proxies = proxies

# Make the HTTP request through the session.
r = s.get('http://www.showmemyip.com/')
守不住的情 2024-12-25 17:48:29

8 years late. But I like:

import os
import requests

os.environ['HTTP_PROXY'] = os.environ['http_proxy'] = 'http://http-connect-proxy:3128/'
os.environ['HTTPS_PROXY'] = os.environ['https_proxy'] = 'http://http-connect-proxy:3128/'
os.environ['NO_PROXY'] = os.environ['no_proxy'] = '127.0.0.1,localhost,.local'

r = requests.get('https://example.com')  # , verify=False
温柔一刀 2024-12-25 17:48:29

The documentation gives a very clear example of proxy usage:

import requests

proxies = {
  'http': 'http://10.10.1.10:3128',
  'https': 'http://10.10.1.10:1080',
}

requests.get('http://example.org', proxies=proxies)

What isn't documented, however, is that you can even configure proxies for individual URLs, even if the scheme is the same!
This comes in handy when you want to use different proxies for different websites you wish to scrape.

proxies = {
  'http://example.org': 'http://10.10.1.10:3128',
  'http://something.test': 'http://10.10.1.10:1080',
}

requests.get('http://something.test/some/url', proxies=proxies)

Additionally, requests.get essentially uses a requests.Session under the hood, so if you need more control, use it directly:

import requests

proxies = {
  'http': 'http://10.10.1.10:3128',
  'https': 'http://10.10.1.10:1080',
}
session = requests.Session()
session.proxies.update(proxies)

session.get('http://example.org')

I use it to set a fallback (a default proxy) that handles all traffic that doesn't match the schemes/URLs specified in the dictionary:

import requests

proxies = {
  'http': 'http://10.10.1.10:3128',
  'https': 'http://10.10.1.10:1080',
}
session = requests.Session()
session.proxies.setdefault('http', 'http://127.0.0.1:9009')
session.proxies.update(proxies)

session.get('http://example.org')
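
One more detail that may be useful here, based on how Session.request merges settings (the addresses below are placeholders): proxies passed to an individual call take precedence over the session-level ones, so you can override the default for a single request without touching session.proxies.

import requests

session = requests.Session()
session.proxies.update({'http': 'http://10.10.1.10:3128'})

# This one call goes through a different proxy; later calls that omit
# proxies= fall back to the session default above.
r = session.get('http://example.org',
                proxies={'http': 'http://10.10.1.99:3128'})
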
薄荷港 2024-12-25 17:48:29

I just made a proxy grabber that can also connect through one of the grabbed proxies without any input.
Here it is:

#Import Modules

from termcolor import colored
from selenium import webdriver
from selenium.webdriver.common.by import By
import requests
import os
import time

#Proxy Grab

options = webdriver.ChromeOptions()
options.add_argument('--headless')
driver = webdriver.Chrome(options=options)
driver.get("https://www.sslproxies.org/")
tbody = driver.find_element(By.TAG_NAME, "tbody")
rows = tbody.find_elements(By.TAG_NAME, "tr")
for row in rows:
    column = row.text.split(" ")
    print(colored(column[0] + ":" + column[1], 'yellow'))
driver.quit()
print("")

os.system('clear')
os.system('cls')

#Proxy Connection

print(colored('Getting Proxies from grabber...', 'green'))
time.sleep(2)
os.system('clear')
os.system('cls')
# 'column' still holds the last proxy scraped in the loop above.
proxy = {"http": "http://" + column[0] + ":" + column[1]}
url = 'https://mobile.facebook.com/login'
r = requests.get(url, proxies=proxy)
print("")
print(colored('Connecting using proxy', 'green'))
print("")
sts = r.status_code
她比我温柔 2024-12-25 17:48:29

Here is my basic class in Python for the Requests module, with some proxy config and a stopwatch!

import requests
import time
class BaseCheck():
    def __init__(self, url):
        self.http_proxy  = "http://user:pw@proxy:8080"
        self.https_proxy = "http://user:pw@proxy:8080"
        self.ftp_proxy   = "http://user:pw@proxy:8080"
        self.proxyDict = {
                      "http"  : self.http_proxy,
                      "https" : self.https_proxy,
                      "ftp"   : self.ftp_proxy
                    }
        self.url = url
        def makearr(tsteps):
            global stemps
            global steps
            stemps = {}
            for step in tsteps:
                stemps[step] = { 'start': 0, 'end': 0 }
            steps = tsteps
        makearr(['init','check'])
        def starttime(typ = ""):
            for stemp in stemps:
                if typ == "":
                    stemps[stemp]['start'] = time.time()
                else:
                    stemps[stemp][typ] = time.time()
        starttime()
    def __str__(self):
        return str(self.url)
    def getrequests(self):
        g = requests.get(self.url, proxies=self.proxyDict)
        print(g.status_code)
        print(g.content)
        print(self.url)
        stemps['init']['end'] = time.time()
        # print(stemps['init']['end'] - stemps['init']['start'])
        x = stemps['init']['end'] - stemps['init']['start']
        print(x)


test = BaseCheck(url='http://google.com')
test.getrequests()
不离久伴 2024-12-25 17:48:29

Already tested; the following code works. You need to use HTTPProxyAuth.

import requests
from requests.auth import HTTPProxyAuth


USE_PROXY = True
proxy_user = "aaa"
proxy_password = "bbb"
http_proxy = "http://your_proxy_server:8080"
https_proxy = "http://your_proxy_server:8080"
proxies = {
    "http": http_proxy,
    "https": https_proxy
}

def test(name):
    print(f'Hi, {name}')  # Press Ctrl+F8 to toggle the breakpoint.
    # Create the session and set the proxies.
    session = requests.Session()
    if USE_PROXY:
        session.trust_env = False
        session.proxies = proxies
        session.auth = HTTPProxyAuth(proxy_user, proxy_password)

    r = session.get('https://www.stackoverflow.com')
    print(r.status_code)

if __name__ == '__main__':
    test('aaa')
抠脚大汉 2024-12-25 17:48:29

It's a bit late, but here is a wrapper class that simplifies scraping proxies and then making an HTTP POST or GET:

ProxyRequests

https://github.com/rootVIII/proxy_requests
泅人 2024-12-25 17:48:29

I'm sharing some code that fetches proxies from the site "https://free-proxy-list.net" and stores them in a file compatible with tools like "Elite Proxy Switcher" (format IP:PORT):

##PROXY_UPDATER - get free proxies from https://free-proxy-list.net/

from lxml.html import fromstring
import requests

######################FIND PROXIES#########################################
def get_proxies():
    url = 'https://free-proxy-list.net/'
    response = requests.get(url)
    parser = fromstring(response.text)
    proxies = set()
    for i in parser.xpath('//tbody/tr')[:299]:   # 299 proxies max
        proxy = ":".join([i.xpath('.//td[1]/text()')[0],
                          i.xpath('.//td[2]/text()')[0]])
        proxies.add(proxy)
    return proxies


######################write to file in format   IP:PORT######################
try:
    proxies = get_proxies()
    with open('proxy_list.txt', 'w') as f:
        for proxy in proxies:
            f.write(proxy + '\n')
    print("DONE")
except Exception:
    print("MAJOR ERROR")