Missing required positional argument

Posted 2025-01-21 16:36:11


I tried to implement federated learning based on the LSTM approach.

def create_keras_model():
    model = Sequential()
    model.add(LSTM(32, input_shape=(3,1)))
    model.add(Dense(1))
    return model

def model_fn():
    keras_model = create_keras_model()
    return tff.learning.from_keras_model(
      keras_model,
      input_spec=(look_back, 1),
      loss=tf.keras.losses.mean_squared_error(),
      metrics=[tf.keras.metrics.mean_squared_error()])

but I get this error when I define iterative_process:

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.001),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))

TypeError: Missing required positional argument

How do I fix it?


Comments (1)

沉睡月亮 2025-01-28 16:36:11


The input_spec=(look_back, 1) tuple you passed does not describe the client training data; input_spec should be the element spec of the client dataset itself, i.e. a structure of TensorSpec objects describing one batch of (features, label).

In addition, loss=tf.keras.losses.mean_squared_error() calls the loss function immediately, without its required y_true and y_pred arguments, and that call is what raises the TypeError. Pass instances instead, such as tf.keras.losses.MeanSquaredError() and tf.keras.metrics.MeanSquaredError().
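For reference, a minimal sketch of the two corrected arguments for the asker's original LSTM model_fn, using only TensorFlow. The batch shapes and look_back = 3 are assumptions taken from the question's input_shape=(3, 1):

```python
import tensorflow as tf

look_back = 3  # assumed from the asker's input_shape=(3, 1)

# input_spec should describe one *batch* of client data as TensorSpecs
# (feature batch, label batch), not a bare (look_back, 1) tuple.
input_spec = (
    tf.TensorSpec(shape=[None, look_back, 1], dtype=tf.float32),
    tf.TensorSpec(shape=[None, 1], dtype=tf.float32),
)

# Loss and metrics must be passed as instances. Calling the functional
# form, e.g. tf.keras.losses.mean_squared_error(), runs it immediately
# without its required y_true/y_pred arguments -- that call is what
# raises "TypeError: missing required positional argument".
loss = tf.keras.losses.MeanSquaredError()
metrics = [tf.keras.metrics.MeanSquaredError()]

# These are then handed to tff.learning.from_keras_model(keras_model,
#     input_spec=input_spec, loss=loss, metrics=metrics).
```
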

[ Sample ]:

import tensorflow as tf
import tensorflow_federated as tff

# Load simulation data.
source, _ = tff.simulation.datasets.emnist.load_data()

def client_data(n):
    return source.create_tf_dataset_for_client(source.client_ids[n]).map(
        lambda e: (tf.reshape(e['pixels'], [-1]), e['label'])
    ).repeat(10).batch(20)

train_data = [client_data(n) for n in range(3)]


def create_keras_model():
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(784,)))
    model.add(tf.keras.layers.Reshape((784, 1)))
    model.add(tf.keras.layers.LSTM(32))
    model.add(tf.keras.layers.Dense(1))
    return model

def model_fn():
    keras_model = create_keras_model()
    return tff.learning.from_keras_model(
        keras_model,
        # Take the input spec from the client dataset itself.
        input_spec=train_data[0].element_spec,
        # Loss and metrics are passed as instances, not called functions.
        loss=tf.keras.losses.MeanSquaredError(),
        metrics=[tf.keras.metrics.MeanSquaredError(),
                 tf.keras.metrics.Accuracy()])

# Simulate a few rounds of training with the selected client devices.
trainer = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.1),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(1.0))

state = trainer.initialize()
for _ in range(50):
    state, metrics = trainer.next(state, train_data)
    # print(metrics['train']['loss'])
    print(metrics)

[ Output ]:

OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('mean_squared_error', 8.502816), ('accuracy', 0.0), ('loss', 8.5030365)]))])
OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('mean_squared_error', 8.500688), ('accuracy', 0.0), ('loss', 8.500914)]))])
OrderedDict([('broadcast', ()), ('aggregation', OrderedDict([('value_sum_process', ()), ('weight_sum_process', ())])), ('train', OrderedDict([('mean_squared_error', 8.498711), ('accuracy', 0.0), ('loss', 8.498943)]))])
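The returned metrics are nested OrderedDicts, so individual values such as the per-round training loss (the target of the commented-out print line) can be read with plain indexing. One round's metrics, copied from the output above:

```python
from collections import OrderedDict

# One round's metrics, as printed by the training loop above.
metrics = OrderedDict([
    ('broadcast', ()),
    ('aggregation', OrderedDict([('value_sum_process', ()),
                                 ('weight_sum_process', ())])),
    ('train', OrderedDict([('mean_squared_error', 8.502816),
                           ('accuracy', 0.0),
                           ('loss', 8.5030365)])),
])

print(metrics['train']['loss'])  # -> 8.5030365
```
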