A good way to read CSV data into Postgres with psycopg2

Published 2024-09-15 16:43:35

I am trying to find a fast way, i.e. fast and not a lot of code, to get CSV data into a Postgres database. I am reading it into Python using csv.DictReader, which works fine. Then I need to generate code somehow that takes the dicts and puts them into a table. I want to do this automatically, as my tables often have hundreds of variables. (I don't want to read directly into Postgres because in many cases I must transform the data, and Python is good for that.)

This is some of what I have got:

import csv
import psycopg2
import psycopg2.extras

csvReader = csv.DictReader(
    open('/home/matthew/Downloads/us_gis_data/statesp020.csv', newline=''),
    delimiter=',')

ConnectionString = "host='localhost' dbname='mydb' user='postgres' password='######'"
try:
    # DictConnection gives cursors whose rows support access by column name
    connection = psycopg2.connect(ConnectionString,
                                  connection_factory=psycopg2.extras.DictConnection)
    print("connecting")
except psycopg2.OperationalError:
    print("did not work")
    raise

dict_cur = connection.cursor()

# dict_cur.execute("CREATE TABLE test (id serial PRIMARY KEY, num integer, data varchar);")
x = 0
for i in range(1, 50):
    x = x + 1
    print(x)
    dict_cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (x, 3.6))
### how do I create the table and insert values using the DictReader?

dict_cur.execute("SELECT * FROM test")
for rec in dict_cur.fetchall():
    print(rec['num'], rec['data'])
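(Since the question asks for speed: psycopg2 also exposes Postgres's COPY command via `cursor.copy_expert`, which is typically much faster than row-by-row INSERTs. A minimal sketch, assuming the target table `test_states` already exists and matches the CSV's header columns; the table name and paths are placeholders:)

```python
import csv
import io

def rows_to_copy_buffer(rows, fieldnames):
    """Serialize dict rows into an in-memory CSV buffer suitable for COPY FROM STDIN."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    for row in rows:
        writer.writerow(row)
    buf.seek(0)
    return buf

# Usage against a live connection (not run here):
# with open('/home/matthew/Downloads/us_gis_data/statesp020.csv', newline='') as f:
#     reader = csv.DictReader(f)
#     buf = rows_to_copy_buffer(reader, reader.fieldnames)
#     dict_cur.copy_expert(
#         'COPY test_states (%s) FROM STDIN WITH CSV' % ','.join(reader.fieldnames),
#         buf)
#     connection.commit()
```

This keeps the transform step in Python (iterate the DictReader, modify each dict before writing it to the buffer) while letting Postgres ingest the result in one bulk operation.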


Comments (2)

沫雨熙 2024-09-22 16:43:35

Say you have a list of field names (presumably you can get this from the header of your CSV file):

fieldnames = ['Name', 'Address', 'City', 'State']

Assuming they are all VARCHARs, you can create the table "TableName":

sql_table = 'CREATE TABLE TableName (%s)' % ','.join('%s VARCHAR(50)' % name for name in fieldnames)
cursor.execute(sql_table)

You can insert a row from a dictionary "row":

sql_insert = ('INSERT INTO TableName (%s) VALUES (%s)' %
              (','.join(fieldnames),
               ','.join('%%(%s)s' % name for name in fieldnames)))
cursor.execute(sql_insert, row)

Or do it all in one go, given a list of dictionaries:

dictlist = [row1, row2, ...]
cursor.executemany(sql_insert, dictlist)

You can adapt this as necessary based on the types of your fields and your use of DictReader.
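(The pattern above can be wrapped into small helpers that derive both statements from `DictReader.fieldnames`; the function names and the VARCHAR(50) default are just assumptions to adapt:)

```python
def make_create_sql(table, fieldnames):
    """Build a CREATE TABLE statement with one VARCHAR(50) column per CSV field."""
    cols = ','.join('%s VARCHAR(50)' % name for name in fieldnames)
    return 'CREATE TABLE %s (%s)' % (table, cols)

def make_insert_sql(table, fieldnames):
    """Build an INSERT using psycopg2's named placeholders, %(name)s,
    so each DictReader row can be passed to execute() directly."""
    return 'INSERT INTO %s (%s) VALUES (%s)' % (
        table,
        ','.join(fieldnames),
        ','.join('%%(%s)s' % name for name in fieldnames))

sql = make_insert_sql('TableName', ['Name', 'City'])
# -> 'INSERT INTO TableName (Name,City) VALUES (%(Name)s,%(City)s)'
```

Note that the table and column names are interpolated with plain string formatting, so they must come from a trusted source (your own CSV header), not from user input.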


谁的新欢旧爱 2024-09-22 16:43:35

I am a novice, but this worked for me. I used pgAdmin to create the 'testCSV' table.

import csv
import psycopg2 as dbapi

con = dbapi.connect(database="testpg", user="postgres", password="secret")
cur = con.cursor()

csvObject = csv.reader(open(r'C:\testcsv.csv', newline=''), dialect='excel', delimiter=',')

passData = "INSERT INTO testCSV (param1, param2, param3, param4, param5) VALUES (%s, %s, %s, %s, %s);"

for row in csvObject:
    cur.execute(passData, row)

con.commit()
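(One execute() per row gets slow on large files; `psycopg2.extras.execute_values` batches many rows into a single INSERT statement. A sketch under the same assumptions as the answer above, with the SQL template split out so it can be built once; the helper name is hypothetical:)

```python
def make_values_sql(table, columns):
    """Template for execute_values: the single %s is expanded into many row tuples."""
    return 'INSERT INTO %s (%s) VALUES %%s' % (table, ','.join(columns))

# Usage against a live cursor (not run here):
# import csv
# import psycopg2.extras
# with open(r'C:\testcsv.csv', newline='') as f:
#     reader = csv.reader(f)
#     header = next(reader)  # skip the header row instead of inserting it
#     psycopg2.extras.execute_values(cur, make_values_sql('testCSV', header), reader)
#     con.commit()
```

Note the original loop also inserts the CSV's header row if the file has one; the `next(reader)` call here skips it.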
