Why can I call model(x, training=True) when my own call() is defined without that argument?


Notice that when I created my model, I defined the call function with the argument something=False. When I use the model in train_step, I pass something=True, training=True. training is not defined in my call, but it is in the default tf.keras.Model call.

Why am I able to execute this with no error? The output just prints a bunch of 'my call's.

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Flatten

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")

train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)

class MyModel(Model):
  def __init__(self):
    super(MyModel, self).__init__()
    self.fl = Flatten()
    self.d = Dense(10)
  
  ######My problem#######
  def call(self, x, something=False):
    if something:
      tf.print('my call')
    x = self.fl(x)
    return self.d(x)


model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(X, Y):
  with tf.GradientTape() as tape:
    ######My problem#######
    predictions = model(X, something=True, training=True)
    loss = loss_object(Y, predictions)
  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

for epoch in range(3):
  for X, Y in train_ds:
    train_step(X, Y)



Comments (1)

顾挽 2025-02-16 13:47:34


In the Model class, the call method documentation says:

To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.

And indeed, __call__ can take any input argument: its signature in the Model class source code is def __call__(self, *args, **kwargs). Extra keywords therefore never hit a fixed parameter list; Keras treats training as a reserved argument and only forwards it to your call() if your call() declares it.
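To illustrate why the extra keyword raises no TypeError, here is a minimal sketch of that dispatch pattern in plain Python. TinyLayer is a hypothetical stand-in, not the actual Keras source (which also handles name scopes, masks, and graph building); it only shows the catch-all signature plus the reserved-argument handling of training:

import inspect

class TinyLayer:
    # Stand-in for a user-defined call(): declares something but not training.
    def call(self, x, something=False):
        if something:
            print('my call')
        return x

    def __call__(self, *args, **kwargs):
        # A catch-all signature accepts any keyword without error.
        training = kwargs.pop('training', None)
        # Forward training only if the user's call() actually declares it.
        if 'training' in inspect.signature(self.call).parameters:
            kwargs['training'] = training
        return self.call(*args, **kwargs)

layer = TinyLayer()
layer(1.0, something=True, training=True)  # prints 'my call', no TypeError

This mirrors what the question observes: something=True reaches call(), while training=True is silently consumed by __call__.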

You can find a more detailed answer here
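If you actually want the training flag inside your own call(), the usual pattern is to declare it explicitly, and Keras will then forward the value you pass to model(...). A sketch reusing the layers from the question's code (MyModelExplicit is a hypothetical name):

class MyModelExplicit(Model):
  def __init__(self):
    super().__init__()
    self.fl = Flatten()
    self.d = Dense(10)

  # training is now declared, so model(x, training=True) reaches it.
  def call(self, x, something=False, training=None):
    if something:
      tf.print('my call, training =', training)
    x = self.fl(x)
    return self.d(x)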
