Inserting a Python dictionary into Postgres
I have a dictionary that has the following shape:
{0: {'a': 'cat',
     'b': 'dog',
     'c': 'bus',
     'd': 'snake'},
 1: {'a': 'ddsdf',
     'b': 'sdfs',
     'c': 'asdfsd',
     'd': 'asdfds'},
}
placeholders = ', '.join(['%s'] * len(df_merge.columns))
columns = ', '.join(df_merge.columns)
df_merge_reset_index = df_merge.reset_index(drop=True)
merge_to_dict = df_merge_reset_index.to_dict('dict')
merge_to_dict_df = pd.DataFrame.from_dict(merge_to_dict, orient='index')
dict_ = merge_to_dict_df.to_dict('dict')
sql = """INSERT INTO test ({}) VALUES ({}) ON CONFLICT DO NOTHING;""" \
    .format(columns, placeholders)
Currently, I am using a for loop:
for i in dict_.values():
    cursor.execute(sql, (i['a'], i['b'], i['c'], i['d']))
But this is very slow, and I would like to use either from psycopg2.extras import execute_values or from psycopg2.extras import execute_batch, but the way my dictionary is set up does not allow this. Would anyone have a recommendation on how to transform my data so I can use it with execute_values or execute_batch?
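For reference, execute_values takes the target SQL with a single %s where the VALUES list goes, plus a plain sequence of row tuples. A minimal sketch of the kind of transformation being asked about, assuming the index-keyed dict_ from the code above and an open psycopg2 connection conn (both names taken from this question, not a definitive implementation):

from psycopg2.extras import execute_values

# flatten {index: {column: value}} into a list of row tuples
rows = [(rec['a'], rec['b'], rec['c'], rec['d']) for rec in dict_.values()]

values_sql = "INSERT INTO test (a, b, c, d) VALUES %s ON CONFLICT DO NOTHING;"

with conn.cursor() as cur:
    # execute_values expands the single %s into one multi-row VALUES clause
    execute_values(cur, values_sql, rows)
conn.commit()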
---- UPDATE:
I changed the structure to something a little closer by converting it to records.
df_merge_reset_index = df_merge.reset_index(drop=True)
merged_dict = df_merge_reset_index.to_dict('records')
This produces a list of records of the form:
[{'a': '',
  'b': '',
  'c': '',
  'd': ''}]
cursor.executemany(sql, merged_dict)
When I run cursor.executemany(sql, merged_dict), I get the following error: TypeError: dict is not a sequence
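The "dict is not a sequence" error comes from combining positional %s placeholders with dict parameters: executemany (and execute_batch) only accept dicts when the SQL uses named %(key)s placeholders. A hedged sketch of two ways around it, reusing the sql and merged_dict names from the update above:

from psycopg2.extras import execute_batch

# Option 1: switch to named placeholders so each record dict can be passed as-is
sql_named = ("INSERT INTO test (a, b, c, d) "
             "VALUES (%(a)s, %(b)s, %(c)s, %(d)s) ON CONFLICT DO NOTHING;")
execute_batch(cursor, sql_named, merged_dict)

# Option 2: keep the positional %s placeholders and pass tuples instead of dicts
rows = [(r['a'], r['b'], r['c'], r['d']) for r in merged_dict]
cursor.executemany(sql, rows)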
Comments (2)
If you have lots of rows to insert, do it in 2 steps instead of using executemany and the like. Even with executemany you would still be sending many separate queries, when the whole job can be done in 2.
You can load the entire df into an intermediate table in a single query without worrying about conflicts, and then transfer it into the target table in another query.
Prerequisites: the df written out as a csv, and an intermediate table to copy it into.
Steps:
1. COPY the csv into the intermediate table with cursor.copy_from(csv).
2. INSERT ... SELECT from the intermediate table into the target table with ON CONFLICT DO NOTHING (sketched below).
https://www.psycopg.org/docs/cursor.html
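A minimal sketch of that two-step flow, assuming the data is in df_merge, that an open connection conn and cursor already exist, and that a staging table with the same columns has been created; test_staging is a placeholder name for illustration, not something from the original post:

import io

# Step 1: stream the DataFrame into the staging table with COPY
buf = io.StringIO()
df_merge.to_csv(buf, index=False, header=False)
buf.seek(0)
# copy_from reads a delimiter-separated text stream (simple values only,
# since it does not handle CSV quoting); sep must match the to_csv output
cursor.copy_from(buf, 'test_staging', sep=',', columns=('a', 'b', 'c', 'd'))

# Step 2: move the rows into the real table, letting Postgres skip duplicates
cursor.execute("""
    INSERT INTO test (a, b, c, d)
    SELECT a, b, c, d FROM test_staging
    ON CONFLICT DO NOTHING;
""")
conn.commit()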