
parl.Agent

Published 2024-06-23 17:58:49

class Agent(algorithm) [source]

alias: parl.Agent
alias: parl.core.paddle.agent.Agent

Agent is one of the three basic classes of PARL. It is responsible for interacting with the environment and collecting data for training the policy.

To implement a customized Agent, users can:

import parl

class MyAgent(parl.Agent):
    def __init__(self, algorithm, act_dim):
        super(MyAgent, self).__init__(algorithm)
        self.act_dim = act_dim

Variables:

alg (parl.Algorithm) – algorithm of this agent.

Public Functions:
  • sample: return a noisy action to perform exploration according to the policy.

  • predict: return an action given current observation.

  • learn: update the parameters of self.alg using the learn_program defined in build_program().

  • save: save parameters of the agent to a given path.

  • restore: restore previous saved parameters from a given path.

  • train: set the agent in training mode.

  • eval: set the agent in evaluation mode.
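
To illustrate the division of labor among these interfaces, here is a framework-free sketch (plain Python, no PARL or paddle dependency; the Q-table, epsilon, and all names are illustrative, not part of the PARL API):

```python
import random

class ToyAgent:
    """Minimal stand-in mimicking the Agent interface contract."""

    def __init__(self, act_dim, epsilon=0.1):
        self.act_dim = act_dim
        self.epsilon = epsilon           # exploration rate used by sample()
        self.training = True             # toggled by train()/eval()
        self.q_values = [0.0] * act_dim  # toy per-action values (the "policy")

    def predict(self, obs):
        # Greedy action: used for evaluation and deployment.
        return max(range(self.act_dim), key=lambda a: self.q_values[a])

    def sample(self, obs):
        # Noisy action: epsilon-greedy exploration during training.
        if random.random() < self.epsilon:
            return random.randrange(self.act_dim)
        return self.predict(obs)

    def train(self):
        self.training = True

    def eval(self):
        self.training = False
```

In a real subclass, predict and sample would forward the observation through self.alg, and learn would update its parameters.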

__init__(algorithm) [source]

Parameters:

algorithm (parl.Algorithm) – an instance of parl.Algorithm. This algorithm is then passed to self.alg.

eval() [source]

Sets the agent in evaluation mode.

learn(*args, **kwargs) [source]

The training interface for Agent.

predict(*args, **kwargs) [source]

Predict an action when given the observation of the environment.

restore(save_path, model=None) [source]

Restore previously saved parameters. This method requires a model that describes the network structure. The save_path argument is typically a value previously passed to save().

Parameters:
  • save_path (str) – path where parameters were previously saved.

  • model (parl.Model) – model that describes the neural network structure. If None, will use self.alg.model.

Raises:

ValueError – if model is None and self.alg.model does not exist.

Example:

agent = AtariAgent()
agent.save('./model_dir')
agent.restore('./model_dir')

sample(*args, **kwargs) [source]

Return an action with noise when given the observation of the environment.

In general, this function is used during the training process, as noise is added to the action to perform exploration.

save(save_path, model=None) [source]

Save the parameters of the agent (or of the given model) to a given path.

Parameters:
  • save_path (str) – where to save the parameters.

  • model (parl.Model) – model that describes the neural network structure. If None, will use self.alg.model.

Example:

agent = AtariAgent()
agent.save('./model_dir')
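
The save/restore pair forms a simple round-trip contract: whatever save() writes under save_path, restore() must be able to load back into a model of the same structure. A minimal sketch of that contract using a plain parameter dict and pickle (PARL itself persists the paddle state dict; the helper names and file layout here are purely illustrative):

```python
import os
import pickle

def save_params(params, save_path):
    # Persist a parameter dict to disk (stand-in for agent.save()).
    os.makedirs(os.path.dirname(save_path) or ".", exist_ok=True)
    with open(save_path, "wb") as f:
        pickle.dump(params, f)

def restore_params(save_path):
    # Load parameters previously written by save_params()
    # (stand-in for agent.restore()).
    with open(save_path, "rb") as f:
        return pickle.load(f)
```

Restoring into a model with a different structure would fail in the real API, since the loaded parameters must match the network described by self.alg.model.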

save_inference_model(save_path, input_shape_list, input_dtype_list, model=None) [source]

Saves the input Layer or function as a model in paddle.jit.TranslatedLayer format, which can then be used for inference.

Parameters:
  • save_path (str) – where to save the inference_model.

  • model (parl.Model) – model that describes the policy network structure. If None, will use self.alg.model.

  • input_shape_list (list) – shape of all inputs of the saved model’s forward method.

  • input_dtype_list (list) – dtype of all inputs of the saved model’s forward method.

Example:

agent = AtariAgent()
agent.save_inference_model('./inference_model_dir', [[None, 128]], ['float32'])

Example with actor-critic:

agent = AtariAgent()
agent.save_inference_model('./inference_ac_model_dir', [[None, 128]], ['float32'], agent.alg.model.actor_model)

train() [source]

Sets the agent in training mode, which is the default setting. The agent's model is affected only if it contains modules (e.g. Dropout, BatchNorm) that behave differently in train and evaluation mode.

Example:

agent.train()   # default setting
assert (agent.training is True)
agent.eval()
assert (agent.training is False)
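
The train/eval switch matters only for mode-dependent layers. The following sketch shows how such a flag can change a layer's behavior, using a hand-rolled inverted-dropout layer as the example (not paddle's nn.Dropout; class and method names are illustrative):

```python
import random

class ToyDropout:
    """Inverted dropout: random in training mode, identity in eval mode."""

    def __init__(self, p=0.5):
        self.p = p              # probability of zeroing an element
        self.training = True    # mode flag, toggled by train()/eval()

    def train(self):
        self.training = True

    def eval(self):
        self.training = False

    def forward(self, xs):
        if not self.training:
            # Evaluation mode: pass inputs through unchanged.
            return list(xs)
        # Training mode: zero each element with probability p and
        # rescale survivors so the expected value is preserved.
        return [0.0 if random.random() < self.p else x / (1 - self.p)
                for x in xs]
```

Because forward() consults self.training, calling eval() before inference guarantees deterministic outputs, which is exactly why agent.eval() should be called before evaluation rollouts.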
