Keras: AttributeError: 'Adam' object has no attribute '_name'

Posted 01-21 17:46


I want to compile my DQN agent, but I get this error:
AttributeError: 'Adam' object has no attribute '_name'

DQN = buildAgent(model, actions)
DQN.compile(Adam(lr=1e-3), metrics=['mae'])

I tried adding a fake _name attribute, but it doesn't work. I'm following a tutorial, and the same code works on the tutor's machine, so it's probably caused by a recent update. How do I fix this?

Here is my full code:

from keras.models import Sequential  # needed: buildModel() constructs a Sequential model
from keras.layers import Dense, Flatten
import gym
from keras.optimizer_v1 import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

env = gym.make('CartPole-v0')
states = env.observation_space.shape[0]
actions = env.action_space.n

episodes = 10

def buildModel(statez, actiones):
    model = Sequential()
    model.add(Flatten(input_shape=(1, statez)))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(actiones, activation='linear'))
    return model

model = buildModel(states, actions)

def buildAgent(modell, actionz):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=50000, window_length=1)
    dqn = DQNAgent(model=modell, memory=memory, policy=policy, nb_actions=actionz, nb_steps_warmup=10, target_model_update=1e-2)
    return dqn

DQN = buildAgent(model, actions)
DQN.compile(Adam(lr=1e-3), metrics=['mae'])
DQN.fit(env, nb_steps=50000, visualize=False, verbose=1)

Comments (3)

忆梦 2025-01-28 17:46:15


My 2 cents: use the legacy Keras optimizer!

#from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.legacy import Adam

It works in my case.
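
For context, here is a minimal sketch of how that import slots into the compile call from the question (it reuses the question's buildAgent, model, and actions, and uses learning_rate rather than the deprecated lr):

from tensorflow.keras.optimizers.legacy import Adam

# Reuse the model and agent built in the question; the legacy Adam class
# still exposes the _name attribute that keras-rl looks up during compile().
DQN = buildAgent(model, actions)
DQN.compile(Adam(learning_rate=1e-3), metrics=['mae'])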

踏月而来 2025-01-28 17:46:15


Your error comes from importing Adam with from keras.optimizer_v1 import Adam. You can solve the problem by using tf.keras.optimizers.Adam from TensorFlow >= v2, as shown below.

(The lr argument is deprecated; it's better to use learning_rate instead.)
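
For clarity, the change relative to the question's code is just the optimizer import and the argument name; here it is in isolation (assuming DQN is the agent built in the question), followed by the complete listing:

# old: raises AttributeError: 'Adam' object has no attribute '_name'
# from keras.optimizer_v1 import Adam
# DQN.compile(Adam(lr=1e-3), metrics=['mae'])

# new: the TF2 Keras Adam with the learning_rate argument
import tensorflow as tf
DQN.compile(tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=['mae'])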

# !pip install keras-rl2
import tensorflow as tf
from keras.layers import Dense, Flatten
import gym
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

env = gym.make('CartPole-v0')
states = env.observation_space.shape[0]
actions = env.action_space.n
episodes = 10

def buildModel(statez, actiones):
    model = tf.keras.Sequential()
    model.add(Flatten(input_shape=(1, statez)))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(actiones, activation='linear'))
    return model

def buildAgent(modell, actionz):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=50000, window_length=1)
    dqn = DQNAgent(model=modell, memory=memory, policy=policy, 
                   nb_actions=actionz, nb_steps_warmup=10, 
                   target_model_update=1e-2)
    return dqn

model = buildModel(states, actions)
DQN = buildAgent(model, actions)
DQN.compile(tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=['mae'])
DQN.fit(env, nb_steps=50000, visualize=False, verbose=1)

尐籹人 2025-01-28 17:46:15


# pip install keras==2.11.0
# pip install tensorflow==2.11.0

dqn.compile(tf.keras.optimizers.legacy.Adam(learning_rate=1e-3), metrics=['mae'])
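
If you pin those versions, a quick sanity check (not part of the original answer) to confirm which releases are actually loaded before compiling the agent:

import tensorflow as tf
import keras

# Verify that the pinned releases are the ones Python actually imports
print(tf.__version__)     # expected: 2.11.0
print(keras.__version__)  # expected: 2.11.0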
