Different number of nodes in each mini_batch
I am fairly new to graph neural networks. I am training a GNN model that uses self-attention, and I have a few questions.
The problem is that the number of nodes (node_num) differs in each batch. In the first batch I have:
Batch(batch=[1181], edge_attr=[1975, 3], edge_index=[2, 1975], x=[1181, 300])
in the second batch I have:
batch=[1134], edge_attr=[1635, 3], edge_index=[2, 1635], x=[1134, 300]
There are 1181 nodes in batch1, but only 1134 in batch2.
When I tried to calculate self-attention between the nodes, I ran into the following problem.
Here is how the self-attention works. Q, K, and V are computed as follows (reconstructed from the shapes below, since the original post showed this as an image):
Q = w_q · X, K = w_k · X, V = w_v · X
Attention(Q, K, V) = softmax(Q · Kᵀ / √d) · V
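For comparison, the usual transformer-style self-attention projects the feature dimension (here 300) rather than the node dimension, so the projection weights do not depend on the node count. A minimal sketch, with an illustrative class name and head size that are not from the original code:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    # Standard scaled dot-product self-attention: the projections act on
    # the feature dimension, so the number of nodes can vary per batch.
    def __init__(self, in_dim: int, d_k: int):
        super().__init__()
        self.w_q = nn.Linear(in_dim, d_k, bias=False)
        self.w_k = nn.Linear(in_dim, d_k, bias=False)
        self.w_v = nn.Linear(in_dim, d_k, bias=False)
        self.d_k = d_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, in_dim] -> q, k, v: [num_nodes, d_k]
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        scores = q @ k.transpose(-2, -1) / self.d_k ** 0.5
        return torch.softmax(scores, dim=-1) @ v

attn = SelfAttention(in_dim=300, d_k=64)
out1 = attn(torch.randn(1181, 300))  # batch1: 1181 nodes
out2 = attn(torch.randn(1134, 300))  # batch2: 1134 nodes
print(out1.shape, out2.shape)
```

The same `attn` module (and therefore the same trained weights) handles both batches, because the weight shapes are `(300, 64)` regardless of node count.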
The dimensions of w_q, w_k, w_v are:
self.w_q = Param(torch.Tensor(self.nodes_num, self.nodes_num))
So the problem I have is this:
in batch1, the dimension of w_q, w_k, w_v is self.w_q = Param(torch.Tensor(1181, 1181))
in batch2, the dimension of w_q, w_k, w_v is self.w_q = Param(torch.Tensor(1134, 1134))
The dimensions vary with the number of nodes, so w_q has to be constantly redefined.
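To make the issue concrete, here is a minimal sketch (shapes taken from the two batches above) showing that a weight matrix sized to batch1's node count cannot even be applied to batch2:

```python
import torch

w_q = torch.empty(1181, 1181)   # parameter sized for batch1's node count
x2 = torch.randn(1134, 300)     # batch2 has only 1134 nodes

try:
    q = w_q @ x2                # (1181, 1181) @ (1134, 300) -> shape error
    ok = True
except RuntimeError:
    ok = False                  # the old w_q cannot be applied to batch2
print("reusable across batches:", ok)  # prints "reusable across batches: False"
```

So every new batch forces a fresh w_q, and whatever the optimizer learned from the previous batch is thrown away.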
Is this equivalent to using only one batch of samples for the model?
If so, how can I solve the problem?