Resource not found for NSE India even when using curl

Posted 2025-01-29 15:16:37


I am scraping the NSE India results calendar site using a curl command, and even then it gives me a "Resource not Found" error. Here's my code:

import json
import os
import time

import pandas as pd

url = "https://www.nseindia.com/api/event-calendar?index=equities"
header1 = "Host: www.nseindia.com"
header2 = "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0"
header3 = "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
header4 = "Accept-Language: en-US,en;q=0.5"
header5 = "Accept-Encoding: gzip, deflate, br"
header6 = "DNT: 1"
header7 = "Connection: keep-alive"
header8 = "Upgrade-Insecure-Requests: 1"
header9 = "Pragma: no-cache"
header10 = "Cache-Control: no-cache"

def run_curl_command(curl_command, max_attempts):
    # Retry while the API keeps answering "Resource not found".
    result = os.popen(curl_command).read()
    count = 0
    while "Resource not found" in result and count < max_attempts:
        result = os.popen(curl_command).read()
        count += 1
        time.sleep(1)
    print("API Read")
    return pd.DataFrame(json.loads(result))

def init():
    max_attempts = 100
    curl_command = (
        f'curl "{url}" -H "{header1}" -H "{header2}" -H "{header3}" -H "{header4}" '
        f'-H "{header5}" -H "{header6}" -H "{header7}" -H "{header8}" '
        f'-H "{header9}" -H "{header10}" --compressed'
    )
    print(f"curl_command : {curl_command}")
    df = run_curl_command(curl_command, max_attempts)
    print(df)

init()


Comments (1)

不疑不惑不回忆 2025-02-05 15:16:37


Use the nsefetch() function as documented here: https://unofficed.com/nse-python/documentation/nsefetch/

In case you want the python-requests method:

from nsepython import *
positions = nsefetch('https://www.nseindia.com/api/event-calendar?index=equities')
print(positions)

In case you want the curl method:

from nsepythonserver import *
positions = nsefetch('https://www.nseindia.com/api/event-calendar?index=equities')
print(positions)

NSE checks whether you are accessing the link directly or coming to it after visiting the main page. That needs to be mimicked using cookies.
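If you would rather not add a dependency, the same handshake can be done by hand: fetch the homepage first so the cookies NSE sets there are stored, then reuse them for the API call. Below is a minimal sketch of that flow with requests.Session; it assumes the homepage still sets the cookies the API checks, and the header values are simply borrowed from the question.

import requests

# Browser-like headers; NSE tends to reject requests without a plausible User-Agent.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0",
    "Accept": "*/*",
    "Accept-Language": "en-US,en;q=0.5",
}

session = requests.Session()
# Step 1: visit the homepage so the session picks up NSE's cookies.
session.get("https://www.nseindia.com", headers=headers, timeout=10)
# Step 2: call the API with the same session (and therefore the same cookies).
resp = session.get(
    "https://www.nseindia.com/api/event-calendar?index=equities",
    headers=headers,
    timeout=10,
)
print(resp.json())

The same two-step flow works with plain curl: save the homepage's cookies with -c cookies.txt, then replay them on the API call with -b cookies.txt.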
