I am currently trying to develop my own "automated" trading journal. I get the data from the Bybit API (https://bybit-exchange.github.io/docs/inverse/#t-introduction) and use the pybit library (https://github.com/verata-veritatis/pybit) to connect to it.
I am trying to pull the closed P&L positions (https://bybit-exchange.github.io/docs/inverse/#t-closedprofitandloss).
I was able to connect to the Bybit API via some Python code.
Now let me describe the problem I am having:
The API request is limited to 50 results PER PAGE.
How can I iterate through all the pages and save this in ONE JSON file?
This is the code I am currently using:
import json
import pandas as pd
from pybit import inverse_perpetual

# Unauthenticated testnet session (not used below)
session_unauth = inverse_perpetual.HTTP(
    endpoint="https://api-testnet.bybit.com"
)

# Authenticated session for private endpoints
session_auth = inverse_perpetual.HTTP(
    endpoint="https://api.bybit.com",
    api_key="",
    api_secret=""
)

# Fetch closed P&L records (the endpoint returns at most 50 per request)
data = session_auth.closed_profit_and_loss(symbol="BTCUSD", limit=50)

# Save the raw response to a JSON file
with open('journal.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False, indent=4)

# Convert the JSON file to CSV via pandas
df = pd.read_json(r"C:\Users\Work\PycharmProjects\pythonProject\journal.json")
df.to_csv(r"C:\Users\Work\PycharmProjects\pythonProject\journal.csv", index=False)
I left the api_key and api_secret empty because this is confidential information.
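A side note on the pandas step: according to the endpoint documentation, the individual trades are nested under result -> data in the response, so reading the raw dump back with pd.read_json may not yield one row per trade. A minimal sketch of flattening just the records first, assuming that documented response shape:

import pandas as pd

# 'data' is the full dict returned by closed_profit_and_loss(...)
records = (data.get("result") or {}).get("data") or []

# One row per closed trade, columns taken straight from the record fields
df = pd.DataFrame(records)
df.to_csv("journal.csv", index=False)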
Comments (1)
When dealing with pagination, there is a parameter one can use to tell the server, or in your case the API, to give you the next N items.
By visiting the link you provided to the API documentation, one can spot a parameter called page, an integer you can send along with the request. This is again limited to 50 pages; after that, you might try to play with start_time or end_time, which I suspect could provide access to even older records. Happy coding.
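Following that suggestion, here is a minimal sketch of such a loop. It assumes pybit forwards the page keyword to the endpoint as documented, and that the records sit under result -> data in the response; the stop condition is a page returning fewer than 50 records:

import json
from pybit import inverse_perpetual

session_auth = inverse_perpetual.HTTP(
    endpoint="https://api.bybit.com",
    api_key="",
    api_secret=""
)

all_records = []
page = 1
while True:
    response = session_auth.closed_profit_and_loss(
        symbol="BTCUSD", limit=50, page=page
    )
    # Records are nested under result -> data in the documented response
    records = (response.get("result") or {}).get("data") or []
    all_records.extend(records)
    if len(records) < 50:  # last (partial or empty) page reached
        break
    page += 1

# Everything ends up in ONE JSON file
with open("journal.json", "w", encoding="utf-8") as f:
    json.dump(all_records, f, ensure_ascii=False, indent=4)

If the history spans more than the 50-page cap, the same loop could be re-run over successive start_time/end_time windows, resetting page to 1 for each window.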