This is my first time using PyTorch and PyTorch Geometric. I am trying to build a simple graph neural network with PyTorch Geometric. I created a custom dataset by following the PyTorch Geometric documentation and extending InMemoryDataset. I then split the dataset into training, validation, and test sets of sizes (3496, 437, 439); these numbers are the number of graphs in each split. Here is my simple neural network:
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 10)
        self.conv2 = GCNConv(10, dataset.num_classes)

    def forward(self, data):
        x, edge_index, batch = data.x, data.edge_index, data.batch

        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)

        return F.log_softmax(x, dim=1)
While training the model I get the following error, which suggests there is a problem with my input dimensions. Could it be caused by my batch size?
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "E:\Users\abc\Anaconda3\lib\site-packages\torch_scatter\scatter.py", line 22, in scatter_add
        size[dim] = int(index.max()) + 1
        out = torch.zeros(size, dtype=src.dtype, device=src.device)
        return out.scatter_add_(dim, index, src)
               ~~~~~~~~~~~~~~~~ <--- HERE
    else:
        return out.scatter_add_(dim, index, src)
RuntimeError: index 13654 is out of bounds for dimension 0 with size 678
The error occurs specifically on this line of the network:
x = self.conv1(x, edge_index)
EDIT: Added more information about edge_index and a more detailed explanation of the data I am using.
Here are the shapes of the variables I am trying to pass in:
x: torch.Size([678, 43])
edge_index: torch.Size([2, 668])
torch.max(edge_index): tensor(541690)
torch.min(edge_index): tensor(1920)
I use a list of Data(x=node_features, edge_index=edge_index, y=labels) objects. When splitting the dataset into training, validation, and test sets, I get (3496, 437, 439) graphs in each split, respectively. Initially I tried to build a single graph from my dataset, but I did not know how that would work with the DataLoader and mini-batches.
train_loader = DataLoader(train_dataset, batch_size=batch_size)
val_loader = DataLoader(val_dataset, batch_size=batch_size)
test_loader = DataLoader(test_dataset, batch_size=batch_size)
Here is the code that generates the graphs from a dataframe. I am trying to create simple graphs with just a handful of vertices and a few edges connecting them. I am probably overlooking something, which is why I am running into this problem. I tried to follow the PyTorch Geometric documentation when creating these graphs (PyTorch Geometric: Creating your own dataset).
def process(self):
    data_list = []
    grouped = df.groupby('EntityId')
    for id, group in grouped:
        node_features = torch.tensor(group.drop(['Labels'], axis=1).values)
        source_nodes = group.index[1:].values
        target_nodes = group.index[:-1].values
        labels = torch.tensor(group.Labels.values)
        edge_index = torch.tensor([source_nodes, target_nodes])
        data = Data(x=node_features, edge_index=edge_index, y=labels)
        data_list.append(data)

    if self.pre_filter is not None:
        data_list = [data for data in data_list if self.pre_filter(data)]

    if self.pre_transform is not None:
        data_list = [self.pre_transform(data) for data in data_list]

    data, slices = self.collate(data_list)
    torch.save((data, slices), self.processed_paths[0])
I would appreciate any help with creating graphs from any kind of data and using them with GCNConv.
Answer:
I agree with @*** that this is a data problem. Your edge_index should reference the nodes of your data, so its max value should not be that high. Since you do not want to show your data and are asking about "creating graphs from any kind of data", here is a solution.
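As a quick illustration (this is only a sketch, not part of the original answer, and it assumes data_list is the list of Data objects built in your process() method): in PyTorch Geometric, edge_index must contain local node indices, i.e. values in [0, num_nodes - 1] for the graph it belongs to, while group.index in your code returns the parent dataframe's row labels. A check like this would flag the problem before training:

for data in data_list:
    if data.num_edges == 0:
        continue  # single-node graphs have no edges to check
    # edge_index must index into the rows of data.x of this particular graph
    assert data.edge_index.min() >= 0
    assert data.edge_index.max() < data.num_nodes, (
        f"edge_index refers to node {int(data.edge_index.max())}, "
        f"but this graph only has {data.num_nodes} nodes"
    )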
I left your Net basically unchanged. You can adjust the constants below to match your data.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

num_node_features = 100
num_classes = 2
num_nodes = 678
num_edges = 1500
num_hidden_nodes = 128

x = torch.randn((num_nodes, num_node_features), dtype=torch.float32)
edge_index = torch.randint(low=0, high=num_nodes, size=(2, num_edges), dtype=torch.long)
y = torch.randint(low=0, high=num_classes, size=(num_nodes,), dtype=torch.long)


class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = GCNConv(num_node_features, num_hidden_nodes)
        self.conv2 = GCNConv(num_hidden_nodes, num_classes)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index

        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)

        return F.log_softmax(x, dim=1)


data = Data(x=x, edge_index=edge_index, y=y)
net = Net()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)

for i in range(1000):
    output = net(data)
    loss = F.cross_entropy(output, data.y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if i % 100 == 0:
        print('Accuracy: ', (torch.argmax(output, dim=1) == data.y).float().mean())
Output
Accuracy: tensor(0.5059)
Accuracy: tensor(0.8702)
Accuracy: tensor(0.9159)
Accuracy: tensor(0.9233)
Accuracy: tensor(0.9336)
Accuracy: tensor(0.9484)
Accuracy: tensor(0.9602)
Accuracy: tensor(0.9676)
Accuracy: tensor(0.9705)
Accuracy: tensor(0.9749)
(Yes, we can overfit random data.)
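Going beyond the answer above and tying it back to the per-entity graphs from the question: below is a rough sketch (my own assumptions, not the original poster's code) of how the edges could be built from local node positions (0..len(group)-1) instead of group.index, and how the resulting list of Data objects would then train in mini-batches with PyTorch Geometric's DataLoader. It reuses df, 'EntityId', and 'Labels' from the question's process() method and the Net class from the answer; the constants in Net (e.g. num_node_features) would of course have to match the real data (43 features per node according to the shapes shown).

import numpy as np
import torch
import torch.nn.functional as F
from torch_geometric.data import Data, DataLoader  # newer versions: torch_geometric.loader.DataLoader

# Build one small chain graph per entity, using *local* node positions for the edges.
data_list = []
for _, group in df.groupby('EntityId'):
    node_features = torch.tensor(group.drop(['Labels'], axis=1).values, dtype=torch.float)
    labels = torch.tensor(group.Labels.values, dtype=torch.long)
    n = len(group)
    source_nodes = np.arange(1, n)      # local indices 1..n-1
    target_nodes = np.arange(0, n - 1)  # local indices 0..n-2
    edge_index = torch.tensor(np.stack([source_nodes, target_nodes]), dtype=torch.long)
    data_list.append(Data(x=node_features, edge_index=edge_index, y=labels))

# DataLoader merges several graphs into one large disconnected graph per batch and
# shifts each graph's edge_index into its own block of node rows, so Net works unchanged.
loader = DataLoader(data_list, batch_size=32, shuffle=True)
net = Net()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(10):
    for batch in loader:
        optimizer.zero_grad()
        out = net(batch)                 # [total number of nodes in the batch, num_classes]
        loss = F.nll_loss(out, batch.y)  # Net returns log-probabilities
        loss.backward()
        optimizer.step()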