How can I save the cleaned DataFrame into a target directory?

Posted 2025-01-22 21:16:04

I am trying to remove duplicates from some large files and save the results into a different directory. I ran the code below, but it saved (overwrote) the files in the root directory. I know that if I switch to inplace=False it won't overwrite the files in the root directory, but it doesn't copy them into the target directory either, so that doesn't help.

Please advise, and thank you! :)

import os
import pandas as pd
from glob import glob
import csv
from pathlib import Path

root = Path(r'C:\my root directory') 
target = Path(r'C:\my root directory\target')
file_list = root.glob("*.csv")

desired_columns = ['ZIP', 'COUNTY', 'COUNTYID']

for csv_file in file_list:
    df = pd.read_csv(csv_file)
    df.drop_duplicates(subset=desired_columns, keep="first", inplace=True)
    df.to_csv(os.path.join(target,csv_file))

Example:

ZIP COUNTYID    COUNTY
32609   1   ALACHUA
32609   1   ALACHUA
32666   1   ALACHUA
32694   1   ALACHUA
32694   1   ALACHUA
32694   1   ALACHUA
32666   1   ALACHUA
32666   1   ALACHUA
32694   1   ALACHUA
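
For what it's worth, the likely cause of the overwrite: csv_file is an absolute path, and os.path.join discards every component that precedes an absolute one, so os.path.join(target, csv_file) resolves straight back to the source file and to_csv writes over it. A minimal sketch of the behavior, using a hypothetical file name:

import os
from pathlib import Path

target = Path(r'C:\my root directory\target')
csv_file = Path(r'C:\my root directory\data.csv')  # hypothetical source file

# On Windows, the absolute second argument makes os.path.join discard
# target entirely, so the "joined" path is just the original file again.
print(os.path.join(target, csv_file))
# -> C:\my root directory\data.csv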



Comments (1)

凶凌 2025-01-29 21:16:04

This should work, while also reducing your dependencies:

import pandas as pd
import pathlib

root = pathlib.Path(r"C:\my root directory")
target = root / "target"
file_list = root.glob("*.csv")

desired_columns = ["ZIP", "COUNTY", "COUNTYID"]
for csv_file in file_list:
    df = pd.read_csv(csv_file)
    df.drop_duplicates(subset=desired_columns, keep="first", inplace=True)
    # csv_file.name drops the source directory, so the output lands in target
    df.to_csv(target / csv_file.name)

Note that since target is a subdirectory of your root directory, you can simply build its path by joining with the / operator. Passing csv_file.name (rather than the full csv_file path) is what makes the output land in target instead of back in root.
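
If it helps, two optional tweaks (assumptions about your setup, not something the answer above requires): pandas writes the DataFrame index as an extra leading column by default, and to_csv fails if the target folder does not exist yet. A variant of the loop with both handled:

import pandas as pd
import pathlib

root = pathlib.Path(r"C:\my root directory")
target = root / "target"
target.mkdir(parents=True, exist_ok=True)  # create the output folder if it is missing

desired_columns = ["ZIP", "COUNTY", "COUNTYID"]
for csv_file in root.glob("*.csv"):
    df = pd.read_csv(csv_file)
    df = df.drop_duplicates(subset=desired_columns, keep="first")
    # index=False keeps the pandas row index out of the output file
    df.to_csv(target / csv_file.name, index=False)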
