DQN PyTorch loss keeps increasing

I am implementing a simple DQN algorithm with PyTorch to solve the CartPole environment from gym. I have been debugging it for a while, but I cannot figure out why the model fails to learn.

Observations:

  • SmoothL1Loss performs worse than MSELoss, but the loss increases with both
  • A smaller learning rate for Adam does not help; I have tested 0.0001, 0.00025, 0.0005, and the default
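For reference, a minimal standalone illustration of how an explicit learning rate is passed to Adam (the small Linear module is just a stand-in for DeepQN; the actual code further below uses Adam's default lr of 0.001):

import torch as T
import torch.nn as nn

net = nn.Linear(4, 2)  # stand-in module; the real code uses DeepQN
optimizer = T.optim.Adam(net.parameters(), lr=0.00025)  # one of the tested values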

Notes:

  • I have debugged each part of the algorithm individually and can say with fair confidence that the problem is in the learn function. I am wondering whether the mistake comes from misunderstanding how detach works in PyTorch, or from some other error in how I am using the framework (see the short check after this list).
  • I have tried to follow the original paper (see link above) as strictly as possible
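For reference, this is how I understand detach to behave in the target computation; a tiny standalone check (the numbers are arbitrary, this is not part of the training code):

import torch as T

w = T.tensor([2.0], requires_grad=True)
prediction = w * 3.0
target = (w * 5.0).detach()   # detached: treated as a constant, like q_next in learn()
loss = ((prediction - target) ** 2).sum()
loss.backward()
print(w.grad)                 # tensor([-24.]) -- the gradient flows only through prediction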

Reference:

import torch as T
import torch.nn as nn
import torch.nn.functional as F
import gym
import numpy as np


class ReplayBuffer:
    def __init__(self, mem_size, input_shape, output_shape):
        self.mem_counter = 0
        self.mem_size = mem_size
        self.input_shape = input_shape

        self.actions = np.zeros(mem_size)
        self.states = np.zeros((mem_size, *input_shape))
        self.states_ = np.zeros((mem_size, *input_shape))
        self.rewards = np.zeros(mem_size)
        self.terminals = np.zeros(mem_size)

    def sample(self, batch_size):
        indices = np.random.choice(self.mem_size, batch_size)
        return self.actions[indices], self.states[indices], \
            self.states_[indices], self.rewards[indices], \
            self.terminals[indices]

    def store(self, action, state, state_, reward, terminal):
        index = self.mem_counter % self.mem_size
        self.actions[index] = action
        self.states[index] = state
        self.states_[index] = state_
        self.rewards[index] = reward
        self.terminals[index] = terminal
        self.mem_counter += 1


class DeepQN(nn.Module):
    def __init__(self, input_shape, output_shape, hidden_layer_dims):
        super(DeepQN, self).__init__()

        self.input_shape = input_shape
        self.output_shape = output_shape

        layers = []
        layers.append(nn.Linear(*input_shape, hidden_layer_dims[0]))
        for index, dim in enumerate(hidden_layer_dims[1:]):
            layers.append(nn.Linear(hidden_layer_dims[index], dim))
        layers.append(nn.Linear(hidden_layer_dims[-1], *output_shape))

        self.layers = nn.ModuleList(layers)

        self.loss = nn.MSELoss()
        self.optimizer = T.optim.Adam(self.parameters())

    def forward(self, states):
        for layer in self.layers[:-1]:
            states = F.relu(layer(states))
        return self.layers[-1](states)

    def learn(self, predictions, targets):
        self.optimizer.zero_grad()
        loss = self.loss(input=predictions, target=targets)
        loss.backward()
        self.optimizer.step()

        return loss


class Agent:
    def __init__(self, epsilon, gamma, input_shape, output_shape):
        self.input_shape = input_shape
        self.output_shape = output_shape
        self.epsilon = epsilon
        self.gamma = gamma

        self.q_eval = DeepQN(input_shape, output_shape, [64])
        self.memory = ReplayBuffer(10000, input_shape, output_shape)

        self.batch_size = 32
        self.learn_step = 0

    def move(self, state):
        if np.random.random() < self.epsilon:
            return np.random.choice(*self.output_shape)
        else:
            self.q_eval.eval()
            state = T.tensor([state]).float()
            action = self.q_eval(state).max(axis=1)[1]
            return action.item()

    def sample(self):
        actions, states, states_, rewards, terminals = \
            self.memory.sample(self.batch_size)

        actions = T.tensor(actions).long()
        states = T.tensor(states).float()
        states_ = T.tensor(states_).float()
        rewards = T.tensor(rewards).view(self.batch_size).float()
        terminals = T.tensor(terminals).view(self.batch_size).long()

        return actions, states, states_, rewards, terminals

    def learn(self, state, action, state_, reward, done):
        self.memory.store(action, state, state_, reward, done)

        if self.memory.mem_counter < self.batch_size:
            return

        self.q_eval.train()
        self.learn_step += 1
        actions, states, states_, rewards, terminals = self.sample()
        indices = np.arange(self.batch_size)

        q_eval = self.q_eval(states)[indices, actions]
        q_next = self.q_eval(states_).detach()
        q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)

        loss = self.q_eval.learn(q_eval, q_target)
        self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0

        return loss.item()


def learn(env, agent, episodes=500):
    print('Episode: Mean Reward: Last Loss: Mean Step')

    rewards = []
    losses = [0]
    steps = []
    num_episodes = episodes
    for episode in range(num_episodes):
        done = False
        state = env.reset()
        total_reward = 0
        n_steps = 0

        while not done:
            action = agent.move(state)
            state_, reward, done, _ = env.step(action)
            loss = agent.learn(state, action, state_, reward, done)

            state = state_
            total_reward += reward
            n_steps += 1

            if loss:
                losses.append(loss)

        rewards.append(total_reward)
        steps.append(n_steps)

        if episode % (episodes // 10) == 0 and episode != 0:
            print(f'{episode:5d} : {np.mean(rewards):5.2f} '
                  f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')
            rewards = []
            losses = [0]
            steps = []

    print(f'{episode:5d} : {np.mean(rewards):5.2f} '
          f': {np.mean(losses):5.2f}: {np.mean(steps):5.2f}')

    return losses, rewards


if __name__ == '__main__':
    env = gym.make('CartPole-v1')
    agent = Agent(1.0, 1.0,
                  env.observation_space.shape,
                  [env.action_space.n])
    learn(env, agent, 500)

Answer:

I think the main problem is the discount factor, gamma. You have set it to 1.0, which means you give future rewards the same weight as the current one. In reinforcement learning we usually care more about the immediate reward than about rewards far in the future, so gamma should always be less than 1.
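Concretely, that is the second argument passed to Agent at the bottom of your script; the only change I made is this value:

    env = gym.make('CartPole-v1')
    agent = Agent(1.0, 0.99,          # gamma = 0.99 instead of 1.0
                  env.observation_space.shape,
                  [env.action_space.n])
    learn(env, agent, 500)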

To try it out, I ran your code with gamma set to 0.99:

Episode: Mean Reward: Last Loss: Mean Step
  100 : 34.80 :  0.34: 34.80
  200 : 40.42 :  0.63: 40.42
  300 : 65.58 :  1.78: 65.58
  400 : 212.06 :  9.84: 212.06
  500 : 407.79 : 19.49: 407.79

As you can see, the loss is still increasing (even if not as much as before), but so is the reward. You should keep in mind that the loss is not a good performance metric here, because you have a moving target. You can reduce the instability of the target by using a target network. With some additional parameter tuning and a target network, the loss could probably be made more stable.
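A minimal sketch of what that could look like on top of your Agent class (the name q_target_net and the sync interval of 100 learn steps are arbitrary choices of mine, not something from your post); only learn changes, the rest of the script stays the same:

class AgentWithTargetNet(Agent):
    def __init__(self, epsilon, gamma, input_shape, output_shape, sync_every=100):
        super().__init__(epsilon, gamma, input_shape, output_shape)
        # Second network with the same architecture, initialized as an exact copy of q_eval.
        self.q_target_net = DeepQN(input_shape, output_shape, [64])
        self.q_target_net.load_state_dict(self.q_eval.state_dict())
        self.sync_every = sync_every

    def learn(self, state, action, state_, reward, done):
        self.memory.store(action, state, state_, reward, done)

        if self.memory.mem_counter < self.batch_size:
            return

        self.q_eval.train()
        self.learn_step += 1
        actions, states, states_, rewards, terminals = self.sample()
        indices = np.arange(self.batch_size)

        q_eval = self.q_eval(states)[indices, actions]
        # Bootstrap from the frozen copy instead of the online network.
        q_next = self.q_target_net(states_).detach()
        q_target = rewards + self.gamma * q_next.max(axis=1)[0] * (1 - terminals)

        loss = self.q_eval.learn(q_eval, q_target)
        self.epsilon *= 0.9 if self.epsilon > 0.1 else 1.0

        # Refresh the target network every sync_every learn steps.
        if self.learn_step % self.sync_every == 0:
            self.q_target_net.load_state_dict(self.q_eval.state_dict())

        return loss.item()

With this, the target values only move when the copy is refreshed, instead of after every single gradient step.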

Moreover, in reinforcement learning the loss value is generally not as meaningful as it is in supervised learning: a decreasing loss does not always imply improved performance, and vice versa.

The problem is that the Q-target keeps moving while the training steps happen; as the agent plays on, predicting the correct sum of rewards becomes extremely difficult (for example, exploring more states and rewards means a higher variance of the return), so the loss increases. This is even more noticeable in more complex environments (more states, varied rewards, etc.).
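In symbols, the target your learn function regresses towards for each sampled transition is

    y = r + \gamma \, (1 - \text{terminal}) \cdot \max_{a'} Q_\theta(s', a')

and since the same parameters \theta that are being optimized also appear inside the target, y shifts after every gradient step, unlike a fixed label in supervised learning.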

At the same time, the Q-network is getting better at approximating the Q-values of each action, so the reward (usually) increases.
