Copying large tables from one Redshift DB to another using pandas and sqlalchemy
So I'm trying to copy a moderately sized schema from one database to another. (Copy as in I need to replicate this schema on another database in the same cluster or server.)
The small tables copied fine.
But while moving one of the larger ones (95,536 records), I got an XX000 Disk Full error.
So I ran a VACUUM and the copy proceeded past the point of failure, but it failed again with the same error at a later point.
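For reference, this is roughly how I ran the vacuum, on the destination engine since that seems to be where the space is being consumed (a sketch; Redshift's VACUUM can't run inside a transaction block, so the connection uses autocommit):

import pandas
from sqlalchemy import text

# VACUUM cannot run inside a transaction on Redshift, so use an
# autocommit connection rather than engine.begin().
vacuum_engine = destination_db_engine.execution_options(isolation_level="AUTOCOMMIT")
with vacuum_engine.connect() as conn:
    conn.execute(text("VACUUM;"))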
The method I'm using:
import pandas
from sqlalchemy import create_engine

# Create
# source_db_engine
# destination_db_engine

# Read the whole table from the source DB and dump it to a CSV file.
with source_db_engine.begin() as src_conn:
    table = pandas.read_sql_table("table", schema="schema", con=src_conn)
    table.to_csv("table.csv", index=False)

# Load the CSV back and append it to the same table in the destination DB.
with destination_db_engine.begin() as dest_conn:
    src_table = pandas.read_csv("table.csv")
    src_table.to_sql("table", schema="schema",
                     index=False, if_exists='append',
                     con=dest_conn)
- Is there a way to do it such that each record can be inserted with its own transaction? (See the sketch after these questions for roughly what I have in mind.)
- What could be the possible reason for the Disk Full error, given that the source and destination DBs are completely identical?
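For the first question, something like the following is what I mean: a minimal, untested sketch that streams the CSV in chunks and commits each chunk in its own transaction (the chunk size and names are just placeholders):

import pandas

CHUNK_SIZE = 1000  # placeholder; 1 would give a transaction per record

# Stream the CSV in chunks instead of loading the whole table into memory.
for chunk in pandas.read_csv("table.csv", chunksize=CHUNK_SIZE):
    # engine.begin() commits when the block exits, so each chunk
    # is written in its own transaction.
    with destination_db_engine.begin() as dest_conn:
        chunk.to_sql("table", schema="schema",
                     index=False, if_exists='append',
                     con=dest_conn)

As far as I understand it, to_sql's own chunksize argument only batches the INSERT statements within the same transaction, which is why this sketch opens a separate engine.begin() block per chunk instead.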