Fixing up data in a pandas dataframe with duplicated data
I haven't had to do any heavy lifting with pandas until now, but I've run into a situation where I could use some guidance.
I've got some code that generates the following dataframe:
ID_x HOST_NM IP_ADDRESS_x SERIAL_x ID_y IP_ADDRESS_y COST PURCHASE_DATE ID IP_ADDRESS SERIAL_y OS
0 62bf philip 192.168.1.115 12345 32 192.168.1.115 36.78 2018-05-05 2 192.168.1.115 NaN Debian 11 Linux
1 3a73 vic 192.168.1.145 17B0P 33 192.168.1.145 749.64 2018-07-26 3 192.168.1.145 17B0P DSM 7.1-42661
2 4237 BILL 192.168.1.99 NaN 31 192.168.1.99 3584.83 2018-03-15 1 192.168.1.99 38174 Windows 10 LTSC
3 3027 tim 192.168.1.96 C02G7 34 192.168.1.96 1289.00 2021-10-13 4 192.168.1.100 C02G7 macOS Monterey 12.4
This dataframe is generated via an outer merge of three other dataframes; the duplicated data is intentional. The idea is to fill in a missing serial number when the row has a hostname and a consistent IP address. I've considered collecting all of the IP addresses in a row and, if at least 80% of them agree, using that address; otherwise NaN.
Then at a later stage, I'll drop the redundant columns.
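To make that concrete, here is a rough sketch of the approach I have in mind, written against the merged frame shown above. It assumes empty strings have already been replaced with NaN (as in the mask step in the code further down); the helper consistent_value, its name, and the exact threshold handling are my own illustrative choices, not pandas built-ins:

import pandas


def consistent_value(row: pandas.Series, threshold: float = 0.8):
    """Return the most common non-null value in the row if it accounts
    for at least `threshold` of the non-null entries, else NA."""
    values = row.dropna()
    if values.empty:
        return pandas.NA
    counts = values.value_counts()  # sorted most-common first
    if counts.iloc[0] / len(values) >= threshold:
        return counts.index[0]
    return pandas.NA


# Collapse each group of redundant columns into a single column. These
# groups match the small example above; the real data has four columns
# per group.
ip_cols = ["IP_ADDRESS_x", "IP_ADDRESS_y", "IP_ADDRESS"]
serial_cols = ["SERIAL_x", "SERIAL_y"]

df["IP_CLEAN"] = df[ip_cols].apply(consistent_value, axis=1)
df["SERIAL_CLEAN"] = df[serial_cols].apply(consistent_value, axis=1)

# Only trust a filled-in serial when the row's IP addresses agreed.
df.loc[df["IP_CLEAN"].isna(), "SERIAL_CLEAN"] = pandas.NA

Note that with only three IP columns, a single disagreement means 2/3 ≈ 67% agreement, so the 80% threshold effectively demands unanimity here (tim's row fails it); even with four columns per group, 3/4 = 75% falls short.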
This is a PoC. The above is a small example data set, but the actual data set contains around 35K unique devices (rows) and 112 columns (4 sets of redundant data).
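Hand-listing the column groups won't scale to 112 columns, so I'd derive them from the merge suffixes, along these lines. This sketch reuses the consistent_value helper above and assumes the plain _x/_y suffixes from the example; with more than two overlapping frames the suffixes can stack (e.g. _x_x), so the real stripping logic would need to be more careful:

from collections import defaultdict

# Bucket columns by base name, stripping the merge suffixes, so the
# row-wise consistency check can run once per group.
groups = defaultdict(list)
for col in df.columns:
    base = col.removesuffix("_x").removesuffix("_y")  # Python 3.9+
    groups[base].append(col)

# Coalesce every group that actually has redundant columns. In practice
# some groups (like the per-source IDs) would likely be excluded.
for base, cols in groups.items():
    if len(cols) > 1:
        df[f"{base}_CLEAN"] = df[cols].apply(consistent_value, axis=1)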
I'm not seeing anything in pandas that looks like it is tailor-made for this kind of situation. Am I wrong about that?
Further examples and the code used here can be found on GitHub.
The code referenced above follows.
import logging
from functools import reduce

# import numpy
import pandas

log = logging.getLogger("merge")
log.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
log.addHandler(ch)

# First source: asset records; note BILL's serial is missing here.
r1 = [
    {
        'ID': '62bf',
        'HOST_NM': 'philip',
        'IP_ADDRESS': '192.168.1.115',
        'SERIAL': '12345',
    },
    {
        'ID': '3a73',
        'HOST_NM': 'vic',
        'IP_ADDRESS': '192.168.1.145',
        'SERIAL': '17B0P',
    },
    {
        'ID': '4237',
        'HOST_NM': 'BILL',
        'IP_ADDRESS': '192.168.1.99',
        'SERIAL': '',
    },
    {
        'ID': '3027',
        'HOST_NM': 'tim',
        'IP_ADDRESS': '192.168.1.96',
        'SERIAL': 'C02G7',
    },
]

# Second source: purchase records.
r2 = [
    {
        'ID': '34',
        'HOST_NM': 'tim',
        'IP_ADDRESS': '192.168.1.96',
        'COST': '1289.00',
        'PURCHASE_DATE': '2021-10-13',
    },
    {
        'ID': '33',
        'HOST_NM': 'vic',
        'IP_ADDRESS': '192.168.1.145',
        'COST': '749.64',
        'PURCHASE_DATE': '2018-07-26',
    },
    {
        'ID': '31',
        'HOST_NM': 'BILL',
        'IP_ADDRESS': '192.168.1.99',
        'COST': '3584.83',
        'PURCHASE_DATE': '2018-03-15',
    },
    {
        'ID': '32',
        'HOST_NM': 'philip',
        'IP_ADDRESS': '192.168.1.115',
        'COST': '36.78',
        'PURCHASE_DATE': '2018-05-05',
    },
]

# Third source: OS inventory; note tim's IP disagrees with r1 and r2.
r3 = [
    {
        'ID': '2',
        'HOST_NM': 'philip',
        'IP_ADDRESS': '192.168.1.115',
        'SERIAL': '',
        'OS': 'Debian 11 Linux',
    },
    {
        'ID': '3',
        'HOST_NM': 'vic',
        'IP_ADDRESS': '192.168.1.145',
        'SERIAL': '17B0P',
        'OS': 'DSM 7.1-42661',
    },
    {
        'ID': '1',
        'HOST_NM': 'BILL',
        'IP_ADDRESS': '192.168.1.99',
        'SERIAL': '38174',
        'OS': 'Windows 10 LTSC',
    },
    {
        'ID': '4',
        'HOST_NM': 'tim',
        'IP_ADDRESS': '192.168.1.100',
        'SERIAL': 'C02G7',
        'OS': 'macOS Monterey 12.4',
    },
]


def unique(l: list) -> list:
    """Return the unique elements of l, preserving order."""
    u = []
    for e in l:
        if e not in u:
            u.append(e)
    return u


df1 = pandas.DataFrame(r1)
df2 = pandas.DataFrame(r2)
df3 = pandas.DataFrame(r3)
df_list = [df1, df2, df3]

# Merge keys and target columns for later stages; not yet used in this PoC.
df_keys = {
    0: ["ID", "SERIAL"],
    1: ["HOST_NM"],
    2: ["HOST_NM", "SERIAL"],
}
target_columns = ["HOST_NM", "SERIAL", "IP_ADDRESS"]

# Chain outer merges on HOST_NM; overlapping column names pick up the
# default _x/_y suffixes.
df = reduce(lambda left, right: pandas.merge(
    left, right, on=["HOST_NM"], how="outer"), df_list)
log.debug(df)

# Replace null and empty strings with numpy.NaN
# df = df.replace(r"^\s*$", numpy.NaN, regex=True)
df = df.mask(df == '')
log.debug(f'\n\n{df}')
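On the "tailor-made" question: the closest built-in I've come across is Series.combine_first, which fills null elements of one column from the corresponding elements of another. Chained, it coalesces a group of redundant columns on a first-non-null-wins basis, but it does no consistency checking, so by itself it doesn't implement the 80% rule above:

# After the mask step, coalesce the serial columns: take SERIAL_x and
# fall back to SERIAL_y wherever SERIAL_x is null.
df["SERIAL"] = df["SERIAL_x"].combine_first(df["SERIAL_y"])

# The same pattern chains across a larger group of columns.
df["IP"] = (df["IP_ADDRESS_x"]
            .combine_first(df["IP_ADDRESS_y"])
            .combine_first(df["IP_ADDRESS"]))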