Out-of-memory error when trying to convert a CSV file to Parquet using Python

Posted 2025-02-04 06:58:57


I am trying to convert a very large CSV file to Parquet.

I have tried the following method:

df1 = pd.read_csv('/kaggle/input/amex-default-prediction/train_data.csv')
df1.to_parquet('/kaggle/input/amex-default-prediction/train.parquet')

but pd.read_csv throws an out-of-memory error.

Is there any way to convert the file without loading it entirely?


Comments (1)

等风也等你 2025-02-11 06:58:57

To solve the memory problem, you can first import the data with pandas' chunk method (the chunksize argument of read_csv) and then save each chunk as a Parquet file. So for your case, for example, create a folder "train_data" and save into it the different Parquet files that correspond to the chunks.

import os
import sys

import pandas as pd
import pyarrow.parquet as pq
# fastparquet must also be installed: it is used as the engine for DataFrame.to_parquet

# Base folder containing the Kaggle CSV files; "parquet/" is the output folder
path = "C:/.../amex-default-prediction/"
parquet = "parquet/"
# Sub-folders (inside "parquet/") that hold the chunked Parquet files, one per CSV.
# Create these folders beforehand.
path_sample_submission = "sample_submission/"
path_test_data = "test_data/"
path_train_data = "train_data/"
path_train_label = "train_labels/"


def get_path_parquet(file):
    """Map a CSV file name to the sub-folder its Parquet chunks are written to."""
    name = file.split('.')[0]
    if name == "sample_submission":
        return path_sample_submission
    elif name == "test_data":
        return path_test_data
    elif name == "train_data":
        return path_train_data
    elif name == "train_labels":
        return path_train_label


def csv_to_parquet(df, title, path, i):
    """
    Write one chunk of CSV data to a Parquet file.
    df    : chunk of CSV data
    title : source file name
    path  : folder the Parquet file is saved into
    i     : chunk index, appended to the output file name
    """
    try:
        title_prefix = title.split(".")[0] + str(i)
        out_title = os.path.join(path, f'{title_prefix}.parquet')
        df.to_parquet(out_title, engine='fastparquet')
    except Exception:
        sys.exit(-1)


def loading_csv_with_chunk(path, file):
    """Return an iterator of DataFrame chunks instead of loading the whole CSV at once."""
    try:
        chunk_csv = pd.read_csv(os.path.join(path, file), low_memory=False, chunksize=5000)
        return chunk_csv
    except Exception:
        sys.exit(-1)


def read_partition_parquet():
    """Read the chunked Parquet files of the train data back into one DataFrame."""
    dataset = pq.ParquetDataset(path + parquet + path_train_data)
    data = dataset.read().to_pandas()
    return data


for file in os.listdir(path):
    if file[-4:] == ".csv":
        print("begin process for: " + str(file) + "....")
        # Load the data lazily, chunk by chunk
        chunk_csv = loading_csv_with_chunk(path, file)
        # Save each chunk in Parquet format
        for i, df_chunk in enumerate(chunk_csv):
            print(df_chunk.shape)
            csv_to_parquet(df_chunk, file, path + parquet + get_path_parquet(file), i)
        print("end process for: " + str(file) + "....")
    else:
        continue
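
Alternatively, if you want a single train.parquet rather than one file per chunk, a minimal sketch of the same chunked idea using pyarrow's ParquetWriter could look like the following. It assumes pyarrow is installed, writes to /kaggle/working because /kaggle/input is read-only on Kaggle, and assumes every chunk yields the same inferred schema as the first one (otherwise pass explicit dtype to read_csv):

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

csv_path = '/kaggle/input/amex-default-prediction/train_data.csv'
out_path = '/kaggle/working/train.parquet'  # /kaggle/input is read-only

writer = None
for chunk in pd.read_csv(csv_path, chunksize=100_000):
    # Convert the pandas chunk to an Arrow table and append it to the same file
    table = pa.Table.from_pandas(chunk, preserve_index=False)
    if writer is None:
        # Open the writer with the schema inferred from the first chunk
        writer = pq.ParquetWriter(out_path, table.schema)
    writer.write_table(table)
if writer is not None:
    writer.close()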

