Why does backpropagation through a 2D convolution fail when using a distribute strategy?
I followed TensorFlow's tutorial to enable multi-GPU training (on a single machine) with a distribute strategy for my custom training loop: https://www.tensorflow.org/guide/distributed_training?hl=en#use_tfdistributestrategy_with_custom_training_loops
I tried using tf.distribute.MirroredStrategy as well as tf.distribute.experimental.CentralStorageStrategy, but both give me the following error:
Traceback (most recent call last):
File "train.py", line 468, in <module>
app.run(run_main)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/absl/app.py", line 312, in run
_run_main(main, args)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
sys.exit(main(argv))
File "train.py", line 462, in run_main
main(**kwargs)
File "train.py", line 424, in main
trainer.training_loop(train_dataset, test_datasets, distribute_strategy=strategy)
File "train.py", line 271, in training_loop
distribute_strategy.run(self.run_train_step, args=(X, y, y_prev, write_image_examples))
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 1312, in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 2888, in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3689, in _call_for_each_replica
return fn(*args, **kwargs)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
Detected at node 'gradient_tape/SINet/si_net/s2_module_5/conv2d_16/grouped_0/conv2d_35/Conv2D/Conv2DBackpropInput' defined at (most recent call last):
File "train.py", line 468, in <module>
app.run(run_main)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/absl/app.py", line 312, in run
_run_main(main, args)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
sys.exit(main(argv))
File "train.py", line 462, in run_main
main(**kwargs)
File "train.py", line 424, in main
trainer.training_loop(train_dataset, test_datasets, distribute_strategy=strategy)
File "train.py", line 271, in training_loop
distribute_strategy.run(self.run_train_step, args=(X, y, y_prev, write_image_examples))
File "train.py", line 172, in run_train_step
gradients = tape.gradient(overall_loss, self.model.trainable_weights)
Node: 'gradient_tape/SINet/si_net/s2_module_5/conv2d_16/grouped_0/conv2d_35/Conv2D/Conv2DBackpropInput'
Detected at node 'gradient_tape/SINet/si_net/s2_module_5/conv2d_16/grouped_0/conv2d_35/Conv2D/Conv2DBackpropInput' defined at (most recent call last):
File "train.py", line 468, in <module>
app.run(run_main)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/absl/app.py", line 312, in run
_run_main(main, args)
File "/home/rroyerrivard/repos/research_sinet/.venv/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
sys.exit(main(argv))
File "train.py", line 462, in run_main
main(**kwargs)
File "train.py", line 424, in main
trainer.training_loop(train_dataset, test_datasets, distribute_strategy=strategy)
File "train.py", line 271, in training_loop
distribute_strategy.run(self.run_train_step, args=(X, y, y_prev, write_image_examples))
File "train.py", line 172, in run_train_step
gradients = tape.gradient(overall_loss, self.model.trainable_weights)
Node: 'gradient_tape/SINet/si_net/s2_module_5/conv2d_16/grouped_0/conv2d_35/Conv2D/Conv2DBackpropInput'
2 root error(s) found.
(0) INVALID_ARGUMENT: Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 32, computed = 96 spatial_dim: 3 input: 96 filter: 1 output: 32 stride: 1 dilation: 1
[[{{node gradient_tape/SINet/si_net/s2_module_5/conv2d_16/grouped_0/conv2d_35/Conv2D/Conv2DBackpropInput}}]]
[[cond/then/_117/cond/train/image/write_summary/ReadVariableOp/_162]]
(1) INVALID_ARGUMENT: Conv2DSlowBackpropInput: Size of out_backprop doesn't match computed: actual = 32, computed = 96 spatial_dim: 3 input: 96 filter: 1 output: 32 stride: 1 dilation: 1
[[{{node gradient_tape/SINet/si_net/s2_module_5/conv2d_16/grouped_0/conv2d_35/Conv2D/Conv2DBackpropInput}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_run_train_step_59237]
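To unpack the shape complaint: for a 1-wide filter with stride 1 and dilation 1 under VALID padding, the output spatial size should equal the input size, so the gradient op expects an out_backprop of 96 along that dimension but receives 32. A quick sanity check of that arithmetic, independent of TensorFlow (the helper name is ours, purely illustrative):

```python
def conv_output_size(input_size, filter_size, stride=1, dilation=1):
    """Spatial output size of a VALID-padded convolution along one dimension."""
    effective_filter = (filter_size - 1) * dilation + 1
    return (input_size - effective_filter) // stride + 1

# Values taken from the error message: input 96, filter 1, stride 1, dilation 1.
expected = conv_output_size(96, 1)  # what the gradient op computes
actual = 32                          # the out_backprop size it actually received
mismatch = expected != actual
```

With these parameters the computed output is 96, so the 32 must come from a tensor produced under different conditions than the gradient op assumes, which is why the error only appears once the distributed/traced execution path changes.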
That is with TensorFlow 2.8, but I also tried 2.9 and got the same error. Training goes perfectly well when I'm not using a distribute strategy. What could cause the issue? The dataset is the same (apart from being distributed by the strategy, just as the tutorial instructs) and the model structure doesn't change, so that shape error makes absolutely no sense to me.
Here is some of my code, in case it helps.
def main(...):
    physical_gpus = tf.config.experimental.list_physical_devices('GPU')
    num_gpu = len(physical_gpus)
    for gpu in physical_gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
    if num_gpu > 1:
        strategy = tf.distribute.MirroredStrategy()
        # strategy = tf.distribute.experimental.CentralStorageStrategy()
    else:
        strategy = tf.distribute.get_strategy()
    tf.get_logger().info('Distribute strategy: {}'.format(strategy))
    with strategy.scope():
        dataset_loader = DatasetLoader(...)
        train_dataset, test_datasets = dataset_loader.prepare(
            datasets_path=datasets_path, distribute_strategy=strategy)
        model = Model(...)
        trainer = Train(...)
        trainer.training_loop(train_dataset, test_datasets, distribute_strategy=strategy)


class Train(object):
    [...]

    def training_loop(self, training_dataset: tf.data.Dataset, testing_datasets: Dict, distribute_strategy: tf.distribute.Strategy):
        for epoch in tf.range(self.epoch, self.num_epochs):
            for batch_num, (X, y, y_prev) in enumerate(training_dataset):
                tf.get_logger().info(f'starting batch inference')
                start = time.time()
                distribute_strategy.run(self.run_train_step, args=(X, y, y_prev))
                tf.get_logger().info(f'batch inference took {time.time() - start}s')

    @tf.function
    def run_train_step(self, image_channels, label, previous_label):
        with tf.GradientTape() as tape:
            mask = self.model(image_channels, training=True)
            pred_loss = self.compute_loss(label, mask)
        gradients = tape.gradient(pred_loss, self.model.trainable_weights)  # CRASHES HERE!!!!!!!!
        self.optimizer.apply_gradients(zip(gradients, self.model.trainable_weights))


class DatasetLoader(object):
    [...]

    def prepare(self, datasets_path="./data", skip_train=False, shuffle=True, distribute_strategy=None):
        options = tf.data.Options()
        options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
        train_dataset = None if skip_train else self._load_dataset(datasets_path, "trainA", "trainB", options, training=True, shuffle=shuffle)
        portrait_test_dataset = self._load_dataset(datasets_path, "testPortraitA", "testPortraitB", options, training=False, shuffle=shuffle)
        video_test_dataset = self._load_dataset(datasets_path, "testVideoA", "testVideoB", options, training=False, shuffle=shuffle)
        test_datasets_dict = {"portrait": portrait_test_dataset, "video": video_test_dataset}
        if distribute_strategy is not None:
            train_dataset = distribute_strategy.experimental_distribute_dataset(train_dataset)
            for key in test_datasets_dict:
                test_datasets_dict[key] = distribute_strategy.experimental_distribute_dataset(test_datasets_dict[key])
        return train_dataset, test_datasets_dict
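For reference on the distribution step used above: Strategy.experimental_distribute_dataset splits each global batch across the replicas while keeping the number of steps per epoch unchanged. A minimal sketch of that behavior with a toy dataset (the dataset here is illustrative, not the asker's loader):

```python
import tensorflow as tf

# MirroredStrategy falls back to a single device when no GPUs are visible.
strategy = tf.distribute.MirroredStrategy()

GLOBAL_BATCH_SIZE = 8
dataset = tf.data.Dataset.range(32).batch(GLOBAL_BATCH_SIZE)

# Same auto-shard setting as in DatasetLoader.prepare above.
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset = dataset.with_options(options)

dist_dataset = strategy.experimental_distribute_dataset(dataset)

# 32 elements / global batch of 8 -> 4 steps, regardless of replica count.
num_steps = sum(1 for _ in dist_dataset)
```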
While writing out the code at the end of my post, I tried some minor changes I hadn't thought about before and randomly found the culprit. The @tf.function decorator above the run_train_step function was causing the issue! I think I added it by mistake while implementing the distribute strategy changes. After I removed it, I was able to successfully run the training. However, TensorFlow prints this kind of warning for both strategies...
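For what it's worth, the custom-training-loop guide linked in the question places @tf.function on a wrapper that calls strategy.run, not on the per-replica step itself. A minimal sketch of that arrangement, using a hypothetical one-layer model rather than the asker's SINet:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one device if no GPUs are visible

GLOBAL_BATCH_SIZE = 8

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(0.01)
    # Reduction.NONE so the loss can be averaged over the *global* batch manually.
    loss_fn = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

def train_step(inputs):
    # Per-replica step: receives this replica's slice of the global batch.
    x, y = inputs
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        per_example_loss = loss_fn(y, pred)
        loss = tf.nn.compute_average_loss(
            per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return loss

@tf.function  # trace the distributed step, not the per-replica body
def distributed_train_step(inputs):
    per_replica_losses = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([32, 4]), tf.random.normal([32, 1])))
dist_dataset = strategy.experimental_distribute_dataset(
    dataset.batch(GLOBAL_BATCH_SIZE))

for batch in dist_dataset:
    loss = distributed_train_step(batch)
```

Decorating the outer wrapper keeps a single traced graph around the whole distributed step, whereas tracing the per-replica function separately (as in the original run_train_step) can interact badly with how the strategy splits and re-batches inputs.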