Split a column in a Dask DataFrame into n columns

Posted 2025-01-14 15:20:56

In a column in a Dask DataFrame, I have strings like this:

column_name_1   column_name_2
a^b^c           j
e^f^g           k^l
h^i             m

I need to split these strings into new columns in the same DataFrame, like this:

column_name_1   column_name_2   column_name_1_1   column_name_1_2   column_name_1_3   column_name_2_1   column_name_2_2
a^b^c           j               a                 b                 c                 j
e^f^g           k^l             e                 f                 g                 k                 l
h^i             m               h                 i                                   m

I cannot figure out how to do this without knowing in advance how many occurrences of the delimiter there are in the data. Also, there are tens of columns in the DataFrame that are to be left alone, so I need to be able to specify which columns to split like this.

My best effort includes something like

df[["column_name_1_1", "column_name_1_2", "column_name_1_3"]] = df["column_name_1"].str.split('^', n=2, expand=True)

but it fails with

ValueError: The columns in the computed data do not match the columns in the provided metadata



Comments (2)

我不会写诗 2025-01-21 15:20:56

Here are two solutions that work without stack, using a loop over the selected column names:

cols = ['column_name_1','column_name_2']
for c in cols:
    df = df.join(df[c].str.split('^',n=2, expand=True).add_prefix(f'{c}_').fillna(''))

print (df)
  column_name_1 column_name_2 column_name_1_0 column_name_1_1 column_name_1_2  \
0         a^b^c             j               a               b               c   
1         e^f^g           k^l               e               f               g   
2           h^i             m               h               i                   

  column_name_2_0 column_name_2_1  
0               j                  
1               k               l  
2               m                  

Or, modifying the other approach with a list comprehension and pd.concat:

import pandas as pd

cols = ['column_name_1','column_name_2']
dfs = [df[c].str.split('^', n=2, expand=True).add_prefix(f'{c}_').fillna('') for c in cols]
df = pd.concat([df] + dfs, axis=1)
print (df)
  column_name_1 column_name_2 column_name_1_0 column_name_1_1 column_name_1_2  \
0         a^b^c             j               a               b               c   
1         e^f^g           k^l               e               f               g   
2           h^i             m               h               i                   

  column_name_2_0 column_name_2_1  
0               j                  
1               k               l  
2               m                  
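Both snippets above hard-code n=2. If the number of delimiters is not known in advance, a small pre-pass can compute it per column; the sketch below (an assumption on my part, not part of the original answer) derives n from the data with str.count. Note the resulting suffixes start at _0, as in the printed output above:

```python
import pandas as pd

df = pd.DataFrame({
    'column_name_1': ['a^b^c', 'e^f^g', 'h^i'],
    'column_name_2': ['j', 'k^l', 'm'],
})

cols = ['column_name_1', 'column_name_2']
for c in cols:
    # str.count takes a regex, so the caret must be escaped
    n = int(df[c].str.count(r'\^').max())
    # split pat of length 1 is treated as a literal, so '^' is fine here
    df = df.join(
        df[c].str.split('^', n=n, expand=True).add_prefix(f'{c}_').fillna('')
    )
print(df)
```

This keeps each column's width at exactly the maximum number of delimiters found, so shorter rows are padded with empty strings.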

ㄖ落Θ余辉 2025-01-21 15:20:56

Unfortunately, using dask.dataframe.Series.str.split with expand=True and an unknown number of splits is not yet supported in Dask; the following returns a NotImplementedError:

import dask.dataframe as dd
import pandas as pd

ddf = dd.from_pandas(
    pd.DataFrame({
        'column_name_1': ['a^b^c', 'e^f^g', 'h^i'], 'column_name_2': ['j', 'k^l', 'm']
    }), npartitions=2
)

# returns NotImplementedError
ddf['column_name_1'].str.split('^', expand=True).compute()

Usually, when a pandas equivalent has not yet been implemented in Dask, map_partitions can be used to apply a Python function to each DataFrame partition. In this case, however, Dask would still need to know how many columns to expect in order to lazily produce a Dask DataFrame, as provided via the meta argument. This makes using Dask for this task challenging. Relatedly, the ValueError occurs because column_name_2 requires only 1 split and returns a Dask DataFrame with 2 columns, but Dask is expecting a DataFrame with 3 columns.

If you do know the number of splits ahead of time, here is one solution (building on @Fontanka16's answer):

import dask.dataframe as dd
import pandas as pd

ddf = dd.from_pandas(
    pd.DataFrame({
        'column_name_1': ['a^b^c', 'e^f^g', 'h^i'], 'column_name_2': ['j', 'k^l', 'm']
    }), npartitions=2
)

ddf_list = []
num_split_dict = {'column_name_1': 2, 'column_name_2': 1}
for col, num_splits in num_split_dict.items():
    split_df = ddf[col].str.split('^', n=num_splits, expand=True).add_prefix(f'{col}_')
    ddf_list.append(split_df)
new_ddf = dd.concat([ddf] + ddf_list, axis=1)
new_ddf.compute()
