Unwanted mean in pivot_table when values are identical

Published on 2025-02-07 12:15:59

When there are duplicate index values, pivot_table takes their mean
(because aggfunc='mean' by default).

For instance:

import pandas as pd

d = pd.DataFrame(data={
    'x_values': [13.4, 13.08, 12.73, 12., 33., 23., 12.],
    'y_values': [1.54, 1.47, 1., 2., 4., 4., 3.],
    'experiment': ['e', 'e', 'e', 'f', 'f', 'f', 'f']})
print(pd.pivot_table(d, index='x_values',
                     columns='experiment', values='y_values', sort=False))

returns:

experiment     e    f
x_values             
13.40       1.54  NaN
13.08       1.47  NaN
12.73       1.00  NaN
12.00        NaN  2.5
33.00        NaN  4.0
23.00        NaN  4.0

As you can see, a new value appears in f (2.5, which is the mean of 2.0 and 3.0).

But I want to keep the rows as they appear in my original DataFrame:

experiment     e    f
x_values             
13.40       1.54  NaN
13.08       1.47  NaN
12.73       1.00  NaN
12.00        NaN  2.0
33.00        NaN  4.0
23.00        NaN  4.0
12.00        NaN  3.0

How can I do it?
I have tried aggfunc=list followed by an explode, but then the original row order is lost...
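For reference, here is a sketch of that attempt (using the DataFrame from above); it shows where the order breaks: explode regroups the duplicated x_values rows next to each other.

```python
import pandas as pd

d = pd.DataFrame(data={
    'x_values': [13.4, 13.08, 12.73, 12., 33., 23., 12.],
    'y_values': [1.54, 1.47, 1., 2., 4., 4., 3.],
    'experiment': ['e', 'e', 'e', 'f', 'f', 'f', 'f']})

# Collect duplicates into lists instead of averaging them ...
t = pd.pivot_table(d, index='x_values', columns='experiment',
                   values='y_values', aggfunc=list, sort=False)

# ... then explode the lists back into scalar rows.
t = t.explode('e').explode('f')
print(t)
# The two x_values == 12.0 rows end up adjacent, so the original
# order (..., 33.0, 23.0, 12.0) is not recoverable from here.
```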

Thanks


Comments (3)

非要怀念 2025-02-14 12:15:59

Here's my solution. You don't really want to pivot on x_values alone (because there are dupes). So add a new unique column (id_col) and pivot on both x_values and id_col. Then you have to do some cleanup:

(d
 .assign(id_col=range(len(d)))
 .pivot(index=['x_values', 'id_col'], columns='experiment')
 .reset_index()
 .drop(columns='id_col')
 .set_index('x_values')
)

Here's the output:

           y_values     
experiment        e    f
x_values                
12.00           NaN  2.0
12.00           NaN  3.0
12.73          1.00  NaN
13.08          1.47  NaN
13.40          1.54  NaN
23.00           NaN  4.0
33.00           NaN  4.0
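One caveat (my note, not part of the original answer): pivot sorts the result, so the output above is ordered by x_values rather than by original row order. A variant that keeps the input order puts the unique id_col first in the index, then drops that level:

```python
import pandas as pd

d = pd.DataFrame(data={
    'x_values': [13.4, 13.08, 12.73, 12., 33., 23., 12.],
    'y_values': [1.54, 1.47, 1., 2., 4., 4., 3.],
    'experiment': ['e', 'e', 'e', 'f', 'f', 'f', 'f']})

out = (d
       .assign(id_col=range(len(d)))   # unique, increasing key per row
       .pivot(index=['id_col', 'x_values'], columns='experiment',
              values='y_values')       # id_col first => rows stay in input order
       .reset_index(level='id_col', drop=True)
      )
print(out)
```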
弱骨蛰伏 2025-02-14 12:15:59

A workaround is to select the data for each unique experiment value and then concatenate the results:

pd.concat([d.loc[d.experiment.eq(c), ['x_values', 'y_values']]
             .rename(columns={'y_values': c})
           for c in d.experiment.unique()])

Result:

   x_values     e    f
0     13.40  1.54  NaN
1     13.08  1.47  NaN
2     12.73  1.00  NaN
3     12.00   NaN  2.0
4     33.00   NaN  4.0
5     23.00   NaN  4.0
6     12.00   NaN  3.0
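A small note I would add (not part of the original answer): because each slice keeps its original integer index labels, a final sort_index() guarantees the input row order even when the experiment groups are interleaved rather than contiguous:

```python
import pandas as pd

d = pd.DataFrame(data={
    'x_values': [13.4, 13.08, 12.73, 12., 33., 23., 12.],
    'y_values': [1.54, 1.47, 1., 2., 4., 4., 3.],
    'experiment': ['e', 'e', 'e', 'f', 'f', 'f', 'f']})

out = (pd.concat([d.loc[d.experiment.eq(c), ['x_values', 'y_values']]
                    .rename(columns={'y_values': c})
                  for c in d.experiment.unique()])
         .sort_index())  # the original RangeIndex labels encode the input order
print(out)
```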
时光是把杀猪刀 2025-02-14 12:15:59

You could also just assign new variables and fill them according to boolean masks:

df = pd.DataFrame(
    data={
        'x_values': [13.4, 13.08, 12.73, 12., 33., 23., 12.],
        'y_values': [1.54, 1.47, 1., 2., 4., 4., 3.],
        'experiment': ['e', 'e', 'e', 'f', 'f', 'f', 'f']
    }
)

df['e'] = df.loc[df['experiment'] == 'e', 'y_values']
df['f'] = df.loc[df['experiment'] == 'f', 'y_values']
df_final = df.drop(columns=['y_values', 'experiment']).set_index(['x_values'])
df_final


-------------------------------------------------
           e    f
x_values        
13.40     1.54  NaN
13.08     1.47  NaN
12.73     1.00  NaN
12.00     NaN   2.0
33.00     NaN   4.0
23.00     NaN   4.0
12.00     NaN   3.0
-------------------------------------------------

If the experiment column contains more distinct values, you can iterate over all of them:

for experiment in df['experiment'].unique():
    df[experiment] = df.loc[df['experiment'] == experiment, 'y_values']
df_final = df.drop(columns=['y_values', 'experiment']).set_index(['x_values'])
df_final

which produces the desired output.

This approach appears to be more efficient than the approach provided by @Stef, at the cost of a few more lines of code.

from time import time

first_approach = []
for i in range(1000):
    start = time()
    pd.concat([df.loc[df.experiment.eq(c), ['x_values', 'y_values']].rename(columns={'y_values': c}) for c in df.experiment.unique()]).set_index(['x_values'])
    first_approach.append(time()-start)
    
second_approach = []
for i in range(1000):
    start = time()
    for experiment in df['experiment'].unique():
        df[experiment] = df.loc[df['experiment'] == experiment, 'y_values']
    df.drop(columns=['y_values', 'experiment']).set_index(['x_values'])
    second_approach.append(time()-start)
    
print(f'Average Time First Approach:\t{sum(first_approach)/len(first_approach):.5f}')
print(f'Average Time Second Approach:\t{sum(second_approach)/len(second_approach):.5f}')

--------------------------------------------
Average Time First Approach:    0.00403
Average Time Second Approach:   0.00205
--------------------------------------------
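Before comparing speeds, it is worth checking (my addition, not in the original answer) that the two approaches produce identical tables on this data; the second approach is run on a copy here so df is not mutated:

```python
import pandas as pd

df = pd.DataFrame(data={
    'x_values': [13.4, 13.08, 12.73, 12., 33., 23., 12.],
    'y_values': [1.54, 1.47, 1., 2., 4., 4., 3.],
    'experiment': ['e', 'e', 'e', 'f', 'f', 'f', 'f']})

# First approach: concatenate per-experiment slices (@Stef's answer).
first = (pd.concat([df.loc[df.experiment.eq(c), ['x_values', 'y_values']]
                      .rename(columns={'y_values': c})
                    for c in df.experiment.unique()])
           .set_index('x_values'))

# Second approach: boolean-mask column assignment, on a copy of df.
tmp = df.copy()
for experiment in tmp['experiment'].unique():
    tmp[experiment] = tmp.loc[tmp['experiment'] == experiment, 'y_values']
second = tmp.drop(columns=['y_values', 'experiment']).set_index('x_values')

# On this data the experiment groups are contiguous, so even the row order matches.
print(first.equals(second))
```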