NSE India resource not found even when using curl
So I am scraping the NSE India results calendar site using a curl command, and even then it's giving me a "Resource not Found" error. Here's my code:
url = f"https://www.nseindia.com/api/event-calendar?index=equities"
header1 ="Host:www.nseindia.com"
header2 = "User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0"
header3 ="Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
header4 ="Accept-Language:en-US,en;q=0.5"
header5 ="Accept-Encoding:gzip, deflate, br"
header6 ="DNT:1"
header7 ="Connection:keep-alive"
header8 ="Upgrade-Insecure-Requests:1"
header9 ="Pragma:no-cache"
header10 ="Cache-Control:no-cache"
def run_curl_command(curl_command, max_attempts):
result = os.popen(curl_command).read()
count = 0
while "Resource not found" in result and count < max_attempts:
result = os.popen(curl_command).read()
count += 1
time.sleep(1)
print("API Read")
result = json.loads(result)
result = pd.DataFrame(result)
def init():
max_attempts = 100
curl_command = f'curl "{url}" -H "{header1}" -H "{header2}" -H "{header3}" -H "{header4}" -H "{header5}" -H "{header6}" -H "{header7}" -H "{header8}" -H "{header9}" -H "{header10}" --compressed '
print(f"curl_command : {curl_command}")
run_curl_command(curl_command, max_attempts)
init()
1 Answer
Use the nsefetch() function as documented here https://unofficed.com/nse-python/documentation/nsefetch/
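For illustration, a minimal sketch, assuming the nsepython package from that documentation exposes nsefetch(url) and returns the parsed JSON payload (the endpoint URL is the one from the question; per its docs, the library handles NSE's cookie handshake internally):

# Minimal sketch, assuming nsepython's nsefetch(url) returns parsed JSON
# and takes care of NSE's cookie/header requirements for you.
from nsepython import nsefetch
import pandas as pd

payload = nsefetch("https://www.nseindia.com/api/event-calendar?index=equities")
df = pd.DataFrame(payload)
print(df.head())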
NSE checks whether you are accessing the API link directly or coming to it after visiting the main page, and that browser-style visit has to be mimicked using cookies. In case you want the python-requests method:
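A minimal sketch of that approach, assuming a plain GET of the home page is enough for NSE to set the required cookies (the session object then replays them automatically on the API call):

# Minimal sketch: visit the home page first so NSE sets its cookies,
# then reuse the same requests.Session (and its cookies) for the API call.
import requests
import pandas as pd

headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) "
                   "Gecko/20100101 Firefox/82.0"),
    "Accept-Language": "en-US,en;q=0.5",
}

session = requests.Session()
session.headers.update(headers)

# Mimic a real browser visit; the cookies land in session.cookies.
session.get("https://www.nseindia.com", timeout=10)

resp = session.get(
    "https://www.nseindia.com/api/event-calendar?index=equities",
    timeout=10,
)
resp.raise_for_status()
df = pd.DataFrame(resp.json())
print(df.head())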
In case you want the curl method:
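Applied to the code from the question, a hedged sketch using curl's cookie-jar flags (-c saves the cookies set during the home-page visit, -b sends them on the API call); cookies.txt is an arbitrary file name chosen here for illustration:

# Minimal sketch: two curl calls via subprocess instead of os.popen.
# -c writes the cookies set by the home page to a jar file;
# -b replays them on the API request, mimicking a prior visit.
import json
import subprocess

import pandas as pd

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) "
      "Gecko/20100101 Firefox/82.0")

# First request: hit the main page and store its cookies.
subprocess.run(
    ["curl", "-s", "-c", "cookies.txt", "-H", f"User-Agent: {ua}",
     "https://www.nseindia.com"],
    capture_output=True, check=True,
)

# Second request: replay the cookies against the event-calendar API.
result = subprocess.run(
    ["curl", "-s", "-b", "cookies.txt", "-H", f"User-Agent: {ua}",
     "--compressed",
     "https://www.nseindia.com/api/event-calendar?index=equities"],
    capture_output=True, check=True, text=True,
)

df = pd.DataFrame(json.loads(result.stdout))
print(df.head())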