parl.Agent
- class Agent(algorithm) [source]

alias: parl.Agent
alias: parl.core.paddle.agent.Agent

Agent is one of the three basic classes of PARL. It is responsible for interacting with the environment and collecting data for training the policy.

To implement a customized Agent, users can:

    import parl

    class MyAgent(parl.Agent):
        def __init__(self, algorithm, act_dim):
            super(MyAgent, self).__init__(algorithm)
            self.act_dim = act_dim
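As a hedged illustration of what such a subclass might fill in, the sketch below adds sample() and predict() on top of the skeleton above. The epsilon-greedy scheme and the assumption that the underlying algorithm exposes a predict() method (as DQN-style algorithms in PARL do) are choices made for this example only:

    import numpy as np
    import paddle
    import parl

    class MyAgent(parl.Agent):
        def __init__(self, algorithm, act_dim):
            super(MyAgent, self).__init__(algorithm)
            self.act_dim = act_dim

        def sample(self, obs):
            # Illustrative epsilon-greedy exploration: act randomly 10% of the time.
            if np.random.rand() < 0.1:
                return np.random.randint(self.act_dim)
            return self.predict(obs)

        def predict(self, obs):
            # Greedy action from the algorithm's output
            # (assumes self.alg.predict returns per-action values).
            obs = paddle.to_tensor(obs, dtype='float32')
            q_values = self.alg.predict(obs)
            return int(q_values.argmax().numpy())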
- Variables:

alg (parl.Algorithm) – algorithm of this agent.
- Public Functions:

sample: return a noisy action to perform exploration according to the policy.
predict: return an action given the current observation.
learn: update the parameters of self.alg using the learn_program defined in build_program().
save: save the parameters of the agent to a given path.
restore: restore previously saved parameters from a given path.
train: set the agent in training mode.
eval: set the agent in evaluation mode.
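A hedged sketch of how these methods typically fit together in a training loop; env is assumed to be a Gym-style environment and agent a user-defined subclass such as MyAgent above, neither of which is provided by parl.Agent itself:

    obs = env.reset()
    for step in range(1000):
        act = agent.sample(obs)              # noisy action for exploration
        next_obs, reward, done, info = env.step(act)
        agent.learn(obs, act, reward, next_obs, done)   # update self.alg
        obs = env.reset() if done else next_obs

    agent.eval()
    act = agent.predict(obs)                 # deterministic action for evaluation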
- __init__(algorithm) [source]

- Parameters:

algorithm (parl.Algorithm) – an instance of parl.Algorithm. This algorithm is then passed to self.alg.
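A hedged construction sketch showing the usual model → algorithm → agent chain. CartpoleModel is a placeholder for a user-defined parl.Model subclass, and the DQN keyword arguments are illustrative:

    from parl.algorithms import DQN

    model = CartpoleModel(obs_dim=4, act_dim=2)   # a user-defined parl.Model
    algorithm = DQN(model, gamma=0.99, lr=1e-3)   # wraps the model with an update rule
    agent = MyAgent(algorithm, act_dim=2)         # interacts with the environment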
- restore(save_path, model=None) [source]

Restore previously saved parameters. This method requires a model that describes the network structure. The save_path argument is typically a value previously passed to save().

- Parameters:

save_path (str) – path where parameters were previously saved.
model (parl.Model) – model that describes the neural network structure. If None, self.alg.model is used.

- Raises:

ValueError – if model is None and self.alg.model does not exist.

Example:

    agent = AtariAgent()
    agent.save('./model_dir')
    agent.restore('./model_dir')
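When parameters should be loaded into a network other than self.alg.model, the optional model argument can be passed explicitly. A sketch, where target_model is a hypothetical parl.Model instance with the same structure as the saved network:

    agent = AtariAgent()
    agent.restore('./model_dir', model=target_model)  # load into target_model instead of self.alg.model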
- sample(*args, **kwargs) [source]

Return an action with noise when given the observation of the environment.

In general, this function is used during training, as noise is added to the action to perform exploration.
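A minimal sketch contrasting sample() with predict(); the agent and observation are assumed from the surrounding context:

    act_train = agent.sample(obs)    # stochastic: exploration noise added
    act_eval = agent.predict(obs)    # deterministic: the policy's best action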
- save(save_path, model=None) [source]

Save parameters.

- Parameters:

save_path (str) – where to save the parameters.
model (parl.Model) – model that describes the neural network structure. If None, self.alg.model is used.

Example:

    agent = AtariAgent()
    agent.save('./model_dir')
- save_inference_model(save_path, input_shape_list, input_dtype_list, model=None) [source]

Saves the input Layer or function as a paddle.jit.TranslatedLayer format model, which can be used for inference.

- Parameters:

save_path (str) – where to save the inference model.
model (parl.Model) – model that describes the policy network structure. If None, self.alg.model is used.
input_shape_list (list) – shapes of all inputs of the saved model's forward method.
input_dtype_list (list) – dtypes of all inputs of the saved model's forward method.

Example:

    agent = AtariAgent()
    agent.save_inference_model('./inference_model_dir', [[None, 128]], ['float32'])

Example with an actor-critic algorithm:

    agent = AtariAgent()
    agent.save_inference_model('./inference_ac_model_dir', [[None, 128]], ['float32'],
                               agent.alg.model.actor_model)
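A saved inference model can later be loaded for deployment without PARL. A minimal sketch, assuming the model was saved by the first example above with a single [None, 128] float32 input and that the path prefix matches the one used when saving:

    import numpy as np
    import paddle

    # Load the TranslatedLayer saved by save_inference_model().
    policy = paddle.jit.load('./inference_model_dir')
    policy.eval()

    obs = paddle.to_tensor(np.zeros([1, 128], dtype='float32'))
    out = policy(obs)   # forward pass of the exported policy network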
- train() [source]

Sets the agent in training mode, which is the default setting. The model of the agent will be affected if it has modules (e.g. Dropout, BatchNorm) that behave differently in train/evaluation mode.

Example:

    agent.train()  # default setting
    assert (agent.training is True)
    agent.eval()
    assert (agent.training is False)
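Why the mode matters, as a sketch: if the model contains a Dropout layer (a hypothetical layout, not part of the base class), forward passes differ between the two modes:

    agent.train()
    out_a = agent.predict(obs)   # Dropout randomly zeroes activations here
    agent.eval()
    out_b = agent.predict(obs)   # Dropout is a no-op; output is deterministic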