Are these Keras and PyTorch snippets equivalent?
I am wondering whether I succeeded in translating the following PyTorch definition to Keras.
In PyTorch, the following multi-layer perceptron was defined:
from torch import nn

hidden = 128

def mlp(size_in, size_out, act=nn.ReLU):
    return nn.Sequential(
        nn.Linear(size_in, hidden),
        act(),
        nn.Linear(hidden, hidden),
        act(),
        nn.Linear(hidden, hidden),
        act(),
        nn.Linear(hidden, size_out),
    )
My translation is:
from tensorflow import keras
from keras import layers

hidden = 128

def mlp(size_in, size_out, act=keras.layers.ReLU):
    return keras.Sequential(
        [
            layers.Dense(hidden, activation=None, name="layer1", input_shape=(size_in, 1)),
            act(),
            layers.Dense(hidden, activation=None, name="layer2", input_shape=(hidden, 1)),
            act(),
            layers.Dense(hidden, activation=None, name="layer3", input_shape=(hidden, 1)),
            act(),
            layers.Dense(size_out, activation=None, name="layer4", input_shape=(hidden, 1)),
        ]
    )
I am particularly confused about the input/output arguments, because that seems to be where TensorFlow and PyTorch differ.
From the documentation:
When a popular kwarg input_shape is passed, then keras will create an input layer to insert before the current layer. This can be treated equivalent to explicitly defining an InputLayer.
So, did I get it right?
1 Answer
In Keras, you can provide an input_shape for the first layer, or alternatively use the tf.keras.layers.Input layer. If you do not provide either of these details, the model gets built the first time you call fit, eval, or predict, or the first time you call the model on some input data. So the input shape will actually be inferred if you do not provide it. See the docs for more details. PyTorch generally infers the input shape at runtime. You can compare their summaries.
(The original answer showed the Keras summary and the PyTorch module printout here.)