Using Dense layers efficiently
I need to implement a layer in TensorFlow for a dataset of size N where each sample has a set of M independent features (each feature is represented by a tensor of dimension L). I want to train M dense layers in parallel, then concatenate the output tensors.
I could implement such a layer with a for loop as below:
import tensorflow as tf

class MyParallelDenseLayer(tf.keras.layers.Layer):
    def __init__(self, dense_kwargs, **kwargs):
        super().__init__(**kwargs)
        self.dense_kwargs = dense_kwargs

    def build(self, input_shape):
        # input_shape is (batch, M, L); build one Dense layer per feature
        self.N, self.M, self.L = input_shape
        self.list_dense_layers = [tf.keras.layers.Dense(**self.dense_kwargs) for _ in range(self.M)]
        super().build(input_shape)

    def call(self, inputs):
        # apply the i-th Dense layer to the i-th feature slice, then concatenate along the last axis
        parallel_output = [self.list_dense_layers[i](inputs[:, i]) for i in range(self.M)]
        return tf.keras.layers.Concatenate()(parallel_output)
But the for loop in the 'call' function makes my layer extremely slow.
Is there a faster way to implement this layer?
Comments (1)
This should be doable using einsum. Expand this layer to your liking with activation functions and whatnot. Test it:
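The code the answer refers to was not included in the scrape, so here is a minimal sketch of an einsum-based version of the layer (the class name ParallelDenseEinsum, the units and activation arguments, and the weight initializers are illustrative assumptions, not taken from the original answer). It stores one (L, units) kernel per feature and performs all M matrix multiplications in a single tf.einsum call, avoiding the Python loop in call:

    import tensorflow as tf

    class ParallelDenseEinsum(tf.keras.layers.Layer):
        def __init__(self, units, activation=None, **kwargs):
            super().__init__(**kwargs)
            self.units = units
            self.activation = tf.keras.activations.get(activation)

        def build(self, input_shape):
            # input_shape is (batch, M, L): one (L, units) kernel and one (units,) bias per feature
            _, self.M, self.L = input_shape
            self.kernel = self.add_weight(name='kernel',
                                          shape=(self.M, self.L, self.units),
                                          initializer='glorot_uniform',
                                          trainable=True)
            self.bias = self.add_weight(name='bias',
                                        shape=(self.M, self.units),
                                        initializer='zeros',
                                        trainable=True)
            super().build(input_shape)

        def call(self, inputs):
            # one matmul per feature, done in a single op: (batch, M, L) x (M, L, units) -> (batch, M, units)
            out = tf.einsum('bml,mlu->bmu', inputs, self.kernel) + self.bias
            out = self.activation(out)
            # flatten the feature axis so the output matches the concatenated loop version
            return tf.reshape(out, (-1, self.M * self.units))

A quick shape check under assumed sizes (N=8, M=5, L=16, units=32):

    x = tf.random.normal((8, 5, 16))
    layer = ParallelDenseEinsum(units=32)
    print(layer(x).shape)  # (8, 160), i.e. (N, M * units)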