Splitting the data into training and testing sets in federated learning
I am new to federated learning. I am currently experimenting with a model by following the official TFF documentation, but I am stuck on an issue and hope to find some explanation here.
I am using my own dataset. The data are distributed across multiple files, and each file is a single client (that is how I plan to structure the model). The dependent and independent variables have already been defined.
Now, my question is: how can I split the data into training and testing sets within each client (file) in federated learning, like we normally do in centralized ML models?
The following code is what I have implemented so far.
Note: my code is inspired by the official documentation and by this post, which is almost identical to my application, except that it aims to split the clients themselves into training and testing clients, while my aim is to split the data inside each client.
from collections import OrderedDict

import tensorflow as tf
import tensorflow_federated as tff

dataset_paths = {
    'client_0': '/content/drive/MyDrive/Colab Notebooks/1.csv',
    'client_1': '/content/drive/MyDrive/Colab Notebooks/2.csv',
    'client_2': '/content/drive/MyDrive/Colab Notebooks/3.csv'
}

## Column types for the 15 CSV columns.
record_defaults = [int(), int(), int(), int(), float(), float(), float(),
                   float(), float(), float(), int(), int(), float(), float(), int()]

@tf.function
def create_tf_dataset_for_client_fn(dataset_path):
    return tf.data.experimental.CsvDataset(dataset_path,
                                           record_defaults=record_defaults,
                                           header=True)

@tf.function
def add_parsing(dataset):
    def parse_dataset(*x):
        ## 'x' holds the dependent variable and 'y' the independent variables
        return OrderedDict([('x', x[-1]), ('y', x[1:-1])])
    return dataset.map(parse_dataset, num_parallel_calls=tf.data.AUTOTUNE)

source = tff.simulation.datasets.FilePerUserClientData(
    dataset_paths, create_tf_dataset_for_client_fn)
source = source.preprocess(add_parsing)

## Create the dataset from client data (note: client_ids[0-2] evaluates to
## client_ids[-2], i.e. a single client, not a range of clients)
dataset_creation = source.create_tf_dataset_for_client(source.client_ids[0-2])
print(dataset_creation)
>>> <_VariantDataset element_spec=OrderedDict([('x', TensorSpec(shape=(), dtype=tf.int32, name=None)), ('y', (TensorSpec(shape=(), dtype=tf.int32, name=None), TensorSpec(shape=(), dtype=tf.int32, name=None), TensorSpec(shape=(), dtype=tf.int32, name=None), TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(), dtype=tf.int32, name=None)))])>
## Convert x into a NumPy array (I think it is necessary for splitting into training and testing sets)
test = tf.nest.map_structure(lambda x: x.numpy(), next(iter(dataset_creation)))
print(test)
>>> OrderedDict([('x', 1), ('y', (0, 1, 9, 85.0, 7.75, 85.0, 95.0, 75.0, 50.0, 6))])
My understanding of supervised ML is that the data is split into training and testing sets, as in the code below. I am not sure how to do this in federated learning, or whether it will work this way at all:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)
So I am looking for an explanation of this issue so that I can proceed to the training phase.
Comments (1)
See this tutorial. You should be able to create two datasets (train and test) based on the clients and their data:
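For instance, here is a minimal sketch of a per-client split (the helper name make_client_splits and the per-client test-set size are illustrative assumptions, not something prescribed by the tutorial). It uses dataset.take(n) to get the first n examples of each client and dataset.skip(n) to get the remainder, so each client keeps disjoint train and test partitions:

NUM_TEST_EXAMPLES = 10  # hypothetical per-client test-set size

def make_client_splits(client_data, client_ids, num_test):
    ## take(n) yields the first n examples and skip(n) the rest,
    ## so the two partitions within each client are disjoint.
    train_sets, test_sets = [], []
    for cid in client_ids:
        ds = client_data.create_tf_dataset_for_client(cid)
        test_sets.append(ds.take(num_test))
        train_sets.append(ds.skip(num_test))
    return train_sets, test_sets

train_data, test_data = make_client_splits(source, source.client_ids,
                                            NUM_TEST_EXAMPLES)

If the CSV rows are ordered in some way, you may also want to shuffle each client dataset with a fixed seed before taking/skipping, so the test partition is representative.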
If you follow the tutorial I linked, you should be able to feed the split data directly to tff.learning.from_keras_model.
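As a rough sketch of that last step (the architecture, batch size, and optimizer below are placeholders chosen for illustration, and I assume the ten parsed features have been stacked into a single float tensor; note also that TFF's convention is that 'x' holds the features and 'y' the label, the opposite of the parse_dataset mapping in the question, so those keys would need to be swapped first):

BATCH_SIZE = 20  # hypothetical

## Batch each client's partitions before handing them to TFF.
train_data = [ds.batch(BATCH_SIZE) for ds in train_data]
test_data = [ds.batch(BATCH_SIZE) for ds in test_data]

def model_fn():
    ## Placeholder architecture: 10 input features, one sigmoid output.
    keras_model = tf.keras.models.Sequential([
        tf.keras.layers.InputLayer(input_shape=(10,)),
        tf.keras.layers.Dense(16, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=train_data[0].element_spec,
        loss=tf.keras.losses.BinaryCrossentropy(),
        metrics=[tf.keras.metrics.BinaryAccuracy()])

## Older TFF releases expose this as
## tff.learning.build_federated_averaging_process instead.
training_process = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))

state = training_process.initialize()
result = training_process.next(state, train_data)  # one federated round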