Parallelizing dummy data generation in pandas

I would like to generate a dummy dataset of 40 million records, each composed of a fake first name and last name, using multiple processor cores (N cores).

Below is a single-task loop that generates a first name and a last name and appends them to a list:

import pandas as pd
from faker import Faker

def fake_data_generation(records):
    fake = Faker(['en_US','en_GB'])
    
    person = []
    
    for i in range(records):
        first_name = fake.first_name()
        last_name = fake.last_name()
        person.append({"First_Name": first_name,
                       "Last_Name": last_name}
                     )
    return person

Output:

for i in range(5):
    df = pd.DataFrame(fake_data_generation(i))

>>> df
  First_Name Last_Name
0      Colin   Stewart
1    Barbara      Rios
2     Victor     Green
3  Stephanie     Booth
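
For reference, the usual pattern for using N cores here is to split the total record count into per-core chunks, run the loop above in worker processes, and concatenate the partial results. Below is only a minimal sketch of that idea with multiprocessing; the pool size and chunk split are illustrative, and remainder rows are ignored for brevity:

import multiprocessing as mp
import pandas as pd
from faker import Faker

def fake_data_generation(records):
    # Same single-task loop as above; each worker process builds its own Faker instance
    fake = Faker(['en_US', 'en_GB'])
    return [{"First_Name": fake.first_name(),
             "Last_Name": fake.last_name()} for _ in range(records)]

if __name__ == "__main__":
    total = 40_000_000
    n_cores = mp.cpu_count()
    chunk = total // n_cores          # rows per worker (remainder dropped for brevity)

    with mp.Pool(n_cores) as pool:
        parts = pool.map(fake_data_generation, [chunk] * n_cores)

    df = pd.DataFrame([row for part in parts for row in part])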

Comments (2)

后来的我们 2025-02-19 17:44:51

Maybe you can use providers directly:

import pandas as pd
import numpy as np
from faker.providers.person.en_US import Provider as us
from faker.providers.person.en_GB import Provider as gb

# Union of the unique first and last names shipped with the US and GB person providers
first_names = list(set(us.first_names).union(gb.first_names))
last_names = list(set(us.last_names).union(gb.last_names))

# Sample all 40 million rows with vectorized calls instead of invoking Faker per record
N = 40_000_000
df = pd.DataFrame({'First_Name': np.random.choice(first_names, N),
                   'Last_Name': np.random.choice(last_names, N)})

Output:

>>> df
         First_Name Last_Name
0             Kayla      Tran
1              Gary     Bates
2             Daisy   Leblanc
3           Tiffany     Ahmed
4            Kellie       May
...             ...       ...
39999995   Kristine   Collier
39999996      Joyce     Mccoy
39999997       Paul   Padilla
39999998      Tonya     Bevan
39999999      Julie    Bright

[40000000 rows x 2 columns]
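
If reproducible output matters, the same vectorized sampling can be driven by a seeded NumPy generator. This is only a minimal sketch of that variation; the seed value is arbitrary:

import numpy as np
import pandas as pd
from faker.providers.person.en_US import Provider as us
from faker.providers.person.en_GB import Provider as gb

first_names = list(set(us.first_names).union(gb.first_names))
last_names = list(set(us.last_names).union(gb.last_names))

# A seeded Generator makes the sampled names repeatable across runs
rng = np.random.default_rng(42)
N = 40_000_000
df = pd.DataFrame({'First_Name': rng.choice(first_names, N),
                   'Last_Name': rng.choice(last_names, N)})
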
╰◇生如夏花灿烂 2025-02-19 17:44:51

I have attempted the approach below, which worked for me. I'd appreciate any reviews or modifications for better performance or to remove any unnecessary steps.

from joblib import Parallel, delayed
import pandas as pd
from faker import Faker
from itertools import chain

fake = Faker(['en_US','en_GB'])

def generate_names_df():
    # Each call returns a one-row list containing a single fake first/last name pair
    names = []
    first_name = fake.first_name()
    last_name = fake.last_name()
    names.append({"First_Name": first_name,
                  "Last_Name": last_name}
                )
    return names

# Run 40 million one-row tasks across 15 worker processes, then flatten the
# per-task lists into one list of records before building the DataFrame
results = Parallel(n_jobs=15)(delayed(generate_names_df)() for i in range(40000000))
results_unlisted = list(chain(*results))
df = pd.DataFrame(results_unlisted)

>>> df.shape
(40000000, 2)
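
One way to reduce the overhead of scheduling 40 million one-row tasks is to have each joblib task generate a whole chunk of rows and concatenate the chunks at the end. The sketch below only illustrates that refinement; generate_chunk is a hypothetical helper, and the chunk count and n_jobs values are arbitrary:

from joblib import Parallel, delayed
import pandas as pd
from faker import Faker

def generate_chunk(n_rows):
    # Each task builds its own Faker instance and returns a whole chunk of rows
    fake = Faker(['en_US', 'en_GB'])
    return [{"First_Name": fake.first_name(),
             "Last_Name": fake.last_name()} for _ in range(n_rows)]

N = 40_000_000
n_chunks = 400                     # 100,000 rows per task instead of 1 row per task
chunk_size = N // n_chunks

chunks = Parallel(n_jobs=15)(delayed(generate_chunk)(chunk_size) for _ in range(n_chunks))
df = pd.DataFrame([row for chunk in chunks for row in chunk])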
