TFF: train_test_client_split to partition each client's data
I am building a federated learning model. I have written the code below, but I keep getting the error shown at the end, which also does not seem accurate. Please tell me how to use the train_test_client_split function properly.
@tf.function
def create_tf_dataset_for_client_fn(dataset_path):
    return tf.data.experimental.CsvDataset(
        dataset_path, record_defaults=record_defaults, header=True)

source = tff.simulation.datasets.FilePerUserClientData(
    dataset_paths, create_tf_dataset_for_client_fn)
print(source.client_ids)
>> ['client_0', 'client_1', 'client_2']
# Signature of the classmethod, as pasted from the documentation:
@classmethod
def from_clients_and_fn():
    client_ids: Iterable[str]
    create_tf_dataset_for_client_fn: Callable[[str], tf.data.Dataset]

Splitting = source.from_clients_and_tf_fn(
    ['client_0', 'client_1', 'client_2'],
    create_tf_dataset_for_client_fn)
source.train_test_client_split(client_data=Splitting,
                               num_test_clients=1)
NotFoundError: client_1; No such file or directory [Op:IteratorGetNext]
The file is there and the path is correct, so I don't know what the problem is here.
Comments (1)
You just need the correct data structure. Try something like the following.
Create dummy data:
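The answer's code snippets were not captured on this page, so the following is a minimal sketch of the dummy-data step. The file names (client_0.csv through client_2.csv), the use of pandas, and the two-column feature/label schema are assumptions for illustration, not from the original answer.

import pandas as pd

# Write one small CSV file per simulated client.
# Schema (assumed): one float feature column, one integer label column.
for i in range(3):
    df = pd.DataFrame({'feature': [1.0, 2.0, 3.0, 4.0],
                       'label': [0, 1, 0, 1]})
    df.to_csv(f'client_{i}.csv', index=False)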
Load, process, and split data:
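Again a sketch rather than the answer's exact code (the record_defaults value and file names are assumptions matching the dummy data above). The two points it illustrates come from the TFF API itself: FilePerUserClientData takes a dict mapping client IDs to file paths, and train_test_client_split is a classmethod on ClientData that splits the clients themselves into train and test populations, not the examples within each client.

import tensorflow as tf
import tensorflow_federated as tff

# Must match the CSV columns written above (assumed schema).
record_defaults = [tf.float32, tf.int32]

def create_tf_dataset_for_client_fn(dataset_path):
    # Receives a file path: FilePerUserClientData resolves each client ID
    # to its file before calling this function.
    return tf.data.experimental.CsvDataset(
        dataset_path, record_defaults=record_defaults, header=True)

# The "correct data structure": a dict from client ID to file path.
dataset_paths = {
    'client_0': 'client_0.csv',
    'client_1': 'client_1.csv',
    'client_2': 'client_2.csv',
}
source = tff.simulation.datasets.FilePerUserClientData(
    dataset_paths, create_tf_dataset_for_client_fn)

# Split the client population: with num_test_clients=1, two clients are
# assigned to training and one to testing.
train_data, test_data = (
    tff.simulation.datasets.ClientData.train_test_client_split(
        client_data=source, num_test_clients=1))

print(train_data.client_ids)  # e.g. ['client_0', 'client_2']
print(test_data.client_ids)   # e.g. ['client_1']

This also explains the NotFoundError in the question: from_clients_and_tf_fn hands the client ID string itself ('client_1') to create_tf_dataset_for_client_fn, which then tries to open 'client_1' as a file path. With FilePerUserClientData the ID-to-path mapping is already handled, so source can be passed to train_test_client_split directly.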