Iterative loss function in an autoencoder

Posted on 2025-02-08 16:03:14

I am trying to implement a custom loss function in a PyTorch autoencoder.

The loss function tries to maximize the cosine similarity between a given output tensor U (a vector) and 100 random vectors J, where both U and J have the same dimension of [300]. This is repeated for each batch.

Suppose we have 30 items per batch; then the output tensor is

train_Y.shape = [30, 300]

random_vectors.shape = [30, 100, 300]
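
For concreteness, the per-vector similarity described above can be sketched as follows (a minimal, illustrative example; the names u and j_vectors are placeholders and do not appear in the code below):

    import torch
    import torch.nn.functional as F

    u = torch.randn(300)               # one output vector U
    j_vectors = torch.randn(100, 300)  # 100 random vectors J

    # Cosine similarity of u against each of the 100 vectors -> shape [100];
    # cosine_similarity broadcasts the [1, 300] view of u against [100, 300].
    sims = F.cosine_similarity(u.unsqueeze(0), j_vectors, dim=-1)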

I can implement the loss function in two ways:

    All_Y = []
    for Y, z_r in zip(train_y, random_vectors):
        Y_cosine_list = []
        for z in z_r:
            cosi = torch.dot(Y, z) / (torch.norm(Y) * torch.norm(z))
            Y_cosine_list.append(cosi)
        All_Y.append(Y_cosine_list)

    All_Y = torch.tensor(All_Y).to(device)
    train_loss = torch.sum(torch.abs(All_Y)) / dim_0
    train_loss = torch.tensor(train_loss.data, requires_grad=True)

or

    train_Y = torch.zeros([dim_0, 100])
    for i, (Y, z_r) in enumerate(zip(train_Y, random_vectors)):
        for j, z in enumerate(z_r):
            train_Y[i, j] = cos(Y, z)
    train_Y = train_Y.to(device)

    train_loss = torch.sum(torch.abs(train_Y)) / dim_0

The second one is more elegant and to the point. However, it gives a "CUDA illegal memory access" error. I have checked that memory is not exceeded in either case. Is there anything wrong with the second implementation?

The first implementation is inelegant, and I am not sure that it makes sense from a neural-net optimization perspective. But it does not give errors, and I am able to complete training for all the epochs.

PS: I have tried encapsulating this code block in a loss_fn method, but I get the same illegal memory access error.

I have tried everything I could find for the illegal memory access error: changing GPUs, removing a torch.stack block, etc. But I can't seem to get rid of the problem.

Comments (1)

心安伴我暖 2025-02-15 16:03:14

Here is a vectorized way to do it:

import torch
import torch.nn as nn


class CosineLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        """
        Args:
            x (torch.Tensor): [batchsize, N, M] tensor.
            y (torch.Tensor): [batchsize, M] tensor.

        Returns:
            torch.Tensor: scalar mean cosine similarity.
        """
        # dot product along dimension 'm', i.e. multiply and sum along 'm'
        dotp = torch.einsum("bm, bnm -> bn", y, x)
        # L2 norm along dimension 'm', combined by broadcasting
        length = torch.norm(y, dim=-1)[:, None] * torch.norm(x, dim=-1)
        # cosine = dot product of the unit vectors
        cos = dotp / length
        return cos.mean()


def test():
    b, n, m = 30, 100, 300
    train_Y = torch.randn(b, m, device='cuda')
    random_vectors = torch.randn(b, n, m, requires_grad=True, device='cuda')
    print(f'{random_vectors.grad = }')

    cosineloss = CosineLoss()
    loss = cosineloss(random_vectors, train_Y)
    print(f'{loss = }')

    loss.backward()
    print(f'{random_vectors.grad.shape = }')
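
Note that forward returns the mean cosine similarity itself, not a quantity to minimize. Since the goal in the question is to maximize similarity, one common choice (an assumption here, not stated in the answer above) is to minimize the negated value, reusing the objects defined in test():

    # To maximize similarity with a standard minimizing optimizer,
    # negate the mean cosine (or, equivalently, use 1 - cosine).
    loss = -cosineloss(random_vectors, train_Y)
    loss.backward()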

References:

  1. torch.einsum
  2. Broadcasting semantics: https://pytorch.org/docs/stable/notes/broadcasting.html