Parameter estimation for a Gaussian mixture model (GMM)

Posted 2025-01-19 08:17:01


I am trying to train a model to estimate a GMM. However, the means of the GMM must be recomputed on every step from a mean-placement parameter. I am following the solution provided here; I'll copy and paste the original code:

import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets as datasets

import torch
from torch import nn
from torch import optim
import torch.distributions as D

num_layers = 8
weights = torch.ones(8, requires_grad=True)
means = torch.tensor(np.random.randn(8, 2), requires_grad=True)
stdevs = torch.tensor(np.abs(np.random.randn(8, 2)), requires_grad=True)

parameters = [weights, means, stdevs]
optimizer1 = optim.SGD(parameters, lr=0.001, momentum=0.9)

num_iter = 10001
for i in range(num_iter):
    mix = D.Categorical(weights)
    comp = D.Independent(D.Normal(means, stdevs), 1)
    gmm = D.MixtureSameFamily(mix, comp)

    optimizer1.zero_grad()
    x = torch.randn(5000, 2)  # this can be an arbitrary sample of x
    loss2 = -gmm.log_prob(x).mean()  # negative log-likelihood of the batch
    loss2.backward()
    optimizer1.step()

    print(i, loss2)
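
This version updates the parameters as expected. A quick sanity check along these lines (not part of the original snippet, just reusing the same tensors) confirms that gradients reach all three leaf tensors:

mix = D.Categorical(weights)
comp = D.Independent(D.Normal(means, stdevs), 1)
gmm = D.MixtureSameFamily(mix, comp)

loss = -gmm.log_prob(torch.randn(5000, 2)).mean()
loss.backward()
# all three grads are populated, so SGD can update them
print(weights.grad is not None, means.grad is not None, stdevs.grad is not None)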

What I would like to do is this:

num_layers = 8
weights = torch.ones(8, requires_grad=True)
means_coef = torch.tensor(10., requires_grad=True)
means = torch.tensor(torch.dstack([torch.linspace(1, means_coef.detach().item(), 8)]*2).squeeze(), requires_grad=True)
stdevs = torch.tensor(np.abs(np.random.randn(8, 2)), requires_grad=True)
parameters = [means_coef]
optimizer1 = optim.SGD(parameters, lr=0.001, momentum=0.9)

num_iter = 10001
for i in range(num_iter):
    # rebuild the means from means_coef on every iteration
    means = torch.tensor(torch.dstack([torch.linspace(1, means_coef.detach().item(), 8)]*2).squeeze(), requires_grad=True)

    mix = D.Categorical(weights)
    comp = D.Independent(D.Normal(means, stdevs), 1)
    gmm = D.MixtureSameFamily(mix, comp)

    optimizer1.zero_grad()
    x = torch.randn(5000, 2)  # this can be an arbitrary sample of x
    loss2 = -gmm.log_prob(x).mean()
    loss2.backward()
    optimizer1.step()

    print(i, means_coef)


However, in this case the parameter is never updated and its grad value is always None. Any ideas on how to fix this?
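
I suspect (though I'm not certain) that the problem is in how means is rebuilt: both .detach().item() and the torch.tensor(...) wrapper create a fresh leaf tensor with no autograd history, so the loss never connects back to means_coef. A minimal sketch of the symptom, using the same construction:

means_coef = torch.tensor(10., requires_grad=True)

# re-wrapping a detached Python float creates a brand-new leaf tensor
means = torch.tensor(
    torch.dstack([torch.linspace(1, means_coef.detach().item(), 8)] * 2).squeeze(),
    requires_grad=True,
)

means.sum().backward()
print(means.grad is None)       # False -- means itself gets a gradient
print(means_coef.grad is None)  # True -- nothing flows back to means_coef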


Comments (1)

云朵有点甜 2025-01-26 08:17:01


Following your description, I have rewritten your model.
If you run it, you can see that all of the parameters change as the model is optimized. I have also included the computation graph of the model at the end. You can modify the GMM class as needed if you want to build a different variant.

import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets as datasets

import torch
from torch import nn
from torch import optim
import torch.distributions as D

class GMM(nn.Module):

    def __init__(self, weights, base, scale, n_cell=8, shift=0, dim=2):
        super(GMM, self).__init__()
        # mixture weights, plus the two scalars that parameterize the means
        self.weight = nn.Parameter(weights)
        self.base = nn.Parameter(base)
        self.scale = nn.Parameter(scale)
        self.grid = torch.arange(1, n_cell + 1)
        self.shift = shift
        self.n_cell = n_cell
        self.dim = dim

    def trsf_grid(self):
        # component means: log(scale * k + shift) / log(base) for k = 1..n_cell,
        # broadcast to (n_cell, dim); differentiable w.r.t. base and scale
        trsf = (
            torch.log(self.scale * self.grid + self.shift)
            / torch.log(self.base)
            ).reshape(-1, 1)
        return trsf.expand(self.n_cell, self.dim)

    def forward(self, x, std):
        means = self.trsf_grid()
        mix = D.Categorical(self.weight)
        comp = D.Independent(D.Normal(means, std), 1)
        gmm = D.MixtureSameFamily(mix, comp)
        return -gmm.log_prob(x).mean()

if __name__ == "__main__":
    weight = torch.ones(8)
    base = torch.tensor(3.)
    scale = torch.tensor(1.)
    stds = torch.tensor(np.abs(np.random.randn(8, 2)), requires_grad=False)
    model = GMM(weight, base, scale)
    print(list(model.parameters()))

    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    for i in range(1000):
        optimizer.zero_grad()
        x = torch.randn(5000, 2)
        loss = model(x, stds)
        loss.backward()
        optimizer.step()

    print(list(model.parameters()))

In my case, it returned the following parameters (first before, then after optimization):

[Parameter containing:
tensor([1., 1., 1., 1., 1., 1., 1., 1.], requires_grad=True), Parameter containing:
tensor(3., requires_grad=True), Parameter containing:
tensor(1., requires_grad=True)]

[Parameter containing:
tensor([0.7872, 1.1010, 1.3390, 1.3757, 0.5122, 0.2884, 1.2597, 0.7597],
       requires_grad=True), Parameter containing:
tensor(3.3207, requires_grad=True), Parameter containing:
tensor(0.2814, requires_grad=True)]
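
If you also want the component means implied by the fitted base and scale, one small follow-up (not part of the run above) is to evaluate trsf_grid() on the trained model; each row is log(scale * k + shift) / log(base), broadcast to both dimensions:

with torch.no_grad():
    fitted_means = model.trsf_grid()  # shape (n_cell, dim) = (8, 2)
print(fitted_means)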

which indeed shows that the parameters are updating.
You can also see the computation graph below:

[Figure: the computation graph of the model]
