Python - iterating through all available pages of an API connection

Posted on 2025-02-13 21:50:32


I am currently trying to develop my own "automated" trading journal. I get the data from the Bybit API (https://bybit-exchange.github.io/docs/inverse/#t-introduction) and use the pybit library (https://github.com/verata-veritatis/pybit) to connect to it.
I am trying to pull the closed P&L positions (https://bybit-exchange.github.io/docs/inverse/#t-closedprofitandloss).

I was able to connect to the Bybit API via some Python code.

Now let me describe the problem I am having:
The API request is limited to 50 results PER PAGE.

How can I iterate through all the pages and save this in ONE JSON file?

This is the code I am currently using:

import json

import pandas as pd
from pybit import inverse_perpetual

# Unauthenticated session against the testnet (not used below, kept from my setup).
session_unauth = inverse_perpetual.HTTP(
    endpoint="https://api-testnet.bybit.com"
)

# Authenticated session against the live API.
session_auth = inverse_perpetual.HTTP(
    endpoint="https://api.bybit.com",
    api_key="",
    api_secret="",
)

# This only fetches the first page (at most 50 records).
data = session_auth.closed_profit_and_loss(symbol="BTCUSD", limit=50)

with open("journal.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=4)

df = pd.read_json(r"C:\Users\Work\PycharmProjects\pythonProject\journal.json")
df.to_csv(r"C:\Users\Work\PycharmProjects\pythonProject\journal.csv", index=None)

I left the api_key and api_secret empty because this is confidential information.


Comments (1)

我的影子我的梦 2025-02-20 21:50:32


When dealing with pagination, there is a parameter you can use to tell the server, or in your case the API, to give you the next N items.

By visiting the API documentation link you provided, one can spot a parameter called page, an integer you can send along with the request. This is again limited to 50 pages; after that, you might try playing with start_time or end_time, which I suspect could provide access to even older records.
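As a minimal sketch of that approach: the helper below keeps requesting page 1, 2, 3, ... and stops at the first page that comes back short or empty. It assumes pybit's closed_profit_and_loss accepts a page keyword and that the records sit under result -> data in the response, as the Bybit inverse docs suggest; verify those names against your actual responses. The paging loop is kept in a plain function so it can be tested without an API connection.

```python
def fetch_all_pages(fetch_page, page_size=50, max_pages=50):
    """Collect records from every page until a short or empty page appears.

    fetch_page(page) must return the list of records for that page number.
    """
    records = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page) or []
        records.extend(batch)
        if len(batch) < page_size:  # a short page means there is no next page
            break
    return records


# Hypothetical adapter for pybit -- field names assumed from the Bybit docs:
#
# import json
#
# def fetch_page(page):
#     resp = session_auth.closed_profit_and_loss(
#         symbol="BTCUSD", limit=50, page=page)
#     return resp["result"]["data"]
#
# all_records = fetch_all_pages(fetch_page)
# with open("journal.json", "w", encoding="utf-8") as f:
#     json.dump(all_records, f, ensure_ascii=False, indent=4)
```

Because all pages are merged into one list before dumping, the whole history ends up in a single JSON file, which pandas can then read and convert to CSV exactly as in your current code.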

Happy coding.
