Dump a csv from sqlalchemy

Published 2024-09-04 04:19:22

For some reason, I want to dump a table from a database (sqlite3) in the form of a csv file. I'm using a python script with elixir (based on sqlalchemy) to modify the database. I was wondering if there is any way to dump the table I use to csv.

I've seen sqlalchemy serializer but it doesn't seem to be what I want. Am I doing it wrong? Should I call the sqlite3 python module after closing my sqlalchemy session to dump to a file instead? Or should I use something homemade?

Comments (9)

灯角 2024-09-11 04:19:22

Modifying Peter Hansen's answer here a bit, to use SQLAlchemy instead of raw db access

import csv

# text mode with newline='' for the csv module on Python 3
# (the original 'wb' mode is Python 2 style)
with open('mydump.csv', 'w', newline='') as outfile:
    outcsv = csv.writer(outfile)
    records = session.query(MyModel).all()
    for record in records:
        # one cell per mapped column, in mapper order
        outcsv.writerow([getattr(record, column.name)
                         for column in MyModel.__mapper__.columns])
爱*していゐ 2024-09-11 04:19:22

There are numerous ways to achieve this, including a simple os.system() call to the sqlite3 utility if you have that installed, but here's roughly what I'd do from Python:

import sqlite3
import csv

con = sqlite3.connect('mydatabase.db')
# text mode with newline='' for the csv module on Python 3
outfile = open('mydump.csv', 'w', newline='')
outcsv = csv.writer(outfile)

cursor = con.execute('select * from mytable')

# dump column titles (optional)
outcsv.writerow(x[0] for x in cursor.description)
# dump rows
outcsv.writerows(cursor.fetchall())

outfile.close()
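
The shell-out route mentioned above can also be done with subprocess instead of os.system, letting the sqlite3 command-line tool do the CSV formatting. A rough sketch, assuming the sqlite3 CLI is installed and on PATH and reusing the same placeholder database and table names:

import subprocess

# Shell out to the sqlite3 CLI and let it format the CSV.
# Assumes 'mydatabase.db' and 'mytable' exist, as in the example above.
with open('mydump.csv', 'w') as outfile:
    subprocess.run(
        ['sqlite3', '-header', '-csv', 'mydatabase.db', 'SELECT * FROM mytable;'],
        stdout=outfile,
        check=True,
    )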
难理解 2024-09-11 04:19:22

I adapted the above examples to my sqlalchemy-based code like this:

import csv
import sqlalchemy as sqAl

metadata = sqAl.MetaData()
engine = sqAl.create_engine('sqlite:///%s' % 'data.db')
metadata.bind = engine

mytable = sqAl.Table('sometable', metadata, autoload=True)
db_connection = engine.connect()

select = sqAl.sql.select([mytable])
result = db_connection.execute(select)

fh = open('data.csv', 'wb')
outcsv = csv.writer(fh)

outcsv.writerow(result.keys())
outcsv.writerows(result)

fh.close()

This works for me with sqlalchemy 0.7.9. I suppose that this would work with all sqlalchemy table and result objects.
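
On newer SQLAlchemy releases (1.4/2.0 style), autoload=True and select([...]) were replaced by autoload_with and select(...). A rough sketch of the same reflection-based dump under that API, reusing the placeholder names above:

import csv
import sqlalchemy as sqAl

engine = sqAl.create_engine('sqlite:///data.db')
metadata = sqAl.MetaData()

# reflect the table from the database (autoload_with replaces autoload=True)
mytable = sqAl.Table('sometable', metadata, autoload_with=engine)

with engine.connect() as conn, open('data.csv', 'w', newline='') as fh:
    result = conn.execute(sqAl.select(mytable))
    outcsv = csv.writer(fh)
    outcsv.writerow(result.keys())   # column names as the header row
    outcsv.writerows(result)         # each Row behaves like a tuple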

窗影残 2024-09-11 04:19:22

I know this is old, but I just had this problem and this is how I solved it:

import os

import pandas as pd
from sqlalchemy import create_engine

basedir = os.path.abspath(os.path.dirname(__file__))
sql_engine = create_engine('sqlite:///' + os.path.join(basedir, 'single_file_app.db'), echo=False)
results = pd.read_sql_query('select * from users', sql_engine)
results.to_csv(os.path.join(basedir, 'mydump2.csv'), index=False, sep=";")
岁吢 2024-09-11 04:19:22
import csv

# newline='' keeps the csv module from adding blank lines on Windows
with open('dump.csv', 'w', newline='') as f:
    out = csv.writer(f)
    out.writerow(['id', 'description'])

    for item in session.query(Queue).all():
        out.writerow([item.id, item.description])

I found this to be useful if you don't mind hand-crafting your column labels.
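
If you would rather not hand-craft the labels, both the header and the row values can be derived from the mapped columns, much like the mapper-based answers above. A rough sketch, reusing the same placeholder Queue model and session:

import csv

from sqlalchemy import inspect

# Column attribute names taken from the mapper, so the header and the
# row values stay in sync without hard-coding them.
columns = [attr.key for attr in inspect(Queue).column_attrs]

with open('dump.csv', 'w', newline='') as f:
    out = csv.writer(f)
    out.writerow(columns)

    for item in session.query(Queue).all():
        out.writerow([getattr(item, name) for name in columns])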

网白 2024-09-11 04:19:22
import csv

with open('ratings.csv', 'w', newline='') as f:
    out = csv.writer(f)
    out.writerow(['id', 'user_id', 'movie_id', 'rating'])

    # write the values in the same order as the header row
    for item in db.query.all():
        out.writerow([item.id, item.user_id, item.movie_id, item.rating])
腻橙味 2024-09-11 04:19:22

I spent a lot of time searching for a solution to this problem and finally created something like this:

import csv

from sqlalchemy import inspect

with open(file_to_write, 'w', newline='') as file:
    out_csv = csv.writer(file, lineterminator='\n')

    # skip the first mapped column (the "id" primary key)
    columns = [column.name for column in inspect(Movies).columns][1:]
    out_csv.writerow(columns)

    session_3 = session_maker()

    # query only the selected columns, in the same order as the header
    extract_query = [getattr(Movies, col) for col in columns]
    for mov in session_3.query(*extract_query):
        out_csv.writerow(mov)

    session_3.close()

It creates a CSV file with the column names as a header and a dump of the entire "movies" table, without the "id" primary key column.

坐在坟头思考人生 2024-09-11 04:19:22

In a modular way: an example using SQLAlchemy with automap and MySQL.

database.py:

from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine

Base = automap_base()

engine = create_engine('mysql://user:pass@localhost:3306/database_name', echo=True)

Base.prepare(engine, reflect=True)

# Map the tables
State = Base.classes.states

session = Session(engine, autoflush=False)

export_to_csv.py:

from database import *
import csv

def export():

    q = session.query(State)

    file = './data/states.csv'

    with open(file, 'w', newline='') as csvfile:
        outcsv = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)

        # header row taken from the reflected table's column names
        header = State.__table__.columns.keys()
        outcsv.writerow(header)

        for record in q.all():
            outcsv.writerow([getattr(record, c) for c in header])

if __name__ == "__main__":
    export()

Results:

name,abv,country,is_state,is_lower48,slug,latitude,longitude,population,area
Alaska,AK,US,y,n,alaska,61.370716,-152.404419,710231,571951.25
Alabama,AL,US,y,y,alabama,32.806671,-86.79113,4779736,50744.0
Arkansas,AR,US,y,y,arkansas,34.969704,-92.373123,2915918,52068.17
Arizona,AZ,US,y,y,arizona,33.729759,-111.431221,6392017,113634.57
California,CA,US,y,y,california,36.116203,-119.681564,37253956,155939.52
Colorado,CO,US,y,y,colorado,39.059811,-105.311104,5029196,103717.53
Connecticut,CT,US,y,y,connecticut,41.597782,-72.755371,3574097,4844.8
District of Columbia,DC,US,n,n,district-of-columbia,38.897438,-77.026817,601723,68.34
Delaware,DE,US,y,y,delaware,39.318523,-75.507141,897934,1953.56
Florida,FL,US,y,y,florida,27.766279,-81.686783,18801310,53926.82
Georgia,GA,US,y,y,georgia,33.040619,-83.643074,9687653,57906.14

甜点 2024-09-11 04:19:22

A simple way to do it using pandas + sqlalchemy

import os
import pandas as pd
from sqlalchemy import create_engine, select
from sqlalchemy import MetaData, Table
from pathlib import Path   

def convert_to_csv(tablename, filename):
    engine = create_engine('sqlite:///your-file.sqlite')
    connection = engine.connect()
    
    metadata = MetaData()
    table = Table(tablename, metadata, autoload_with=engine)
    stmt = select(table)
    results = connection.execute(stmt).fetchall() # .fetchmany(size=10)

    filepath = Path(filename)  
    filepath.parent.mkdir(parents=True, exist_ok=True)  

    df = pd.DataFrame(results)
    df.to_csv(filepath, index=False)

    print(f'\nData has been exported successfully into {os.getcwd()}/{filepath}\n')


convert_to_csv('your-table-name', '../your-path/your-new-file.csv')