I am trying to implement a normalizing flow according to the RealNVP model for density estimation.
First, I am trying to make it work on the "moons" toy dataset.
The model produces the expected result when not using the BatchNormalization bijector. However, when adding the BatchNormalization bijector to the model, the prob and log_prob methods return unexpected results.
Following is a code snippet setting up the model:
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

layers = 6
dimensions = 2
hidden_units = [512, 512]
bijectors = []
base_dist = tfd.Normal(loc=0.0, scale=1.0)  # specify base distribution

for i in range(layers):
    # Adding the BatchNormalization bijector corrupts the results
    bijectors.append(tfb.BatchNormalization())
    # RealNVP is the custom coupling-layer bijector defined earlier in the notebook
    bijectors.append(RealNVP(input_shape=dimensions, n_hidden=hidden_units))
    bijectors.append(tfp.bijectors.Permute([1, 0]))

# Reverse the list for Chain and drop its last bijector
bijector = tfb.Chain(bijectors=list(reversed(bijectors))[:-1], name='chain_of_real_nvp')

flow = tfd.TransformedDistribution(
    distribution=tfd.Sample(base_dist, sample_shape=[dimensions]),
    bijector=bijector
)
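For context, the flow is then fit to the moons samples by minimizing the negative log-likelihood, roughly like the sketch below. The optimizer settings, the number of steps, and the moons_batch placeholder are illustrative; the exact loop is in the notebook linked further down.

import tensorflow as tf

optimizer = tf.optimizers.Adam(learning_rate=1e-3)   # illustrative settings

def train_step(batch):
    with tf.GradientTape() as tape:
        # maximizing the likelihood of the data = minimizing the negative log-probability
        loss = -tf.reduce_mean(flow.log_prob(batch))
    grads = tape.gradient(loss, flow.trainable_variables)
    optimizer.apply_gradients(zip(grads, flow.trainable_variables))
    return loss

for step in range(2000):                 # number of steps is illustrative
    loss = train_step(moons_batch)       # moons_batch: a [batch_size, 2] float32 tensor of moons samples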
When the BatchNormalization bijector is omitted, both sampling and evaluating the probability return expected results:
[heatmap: samples and estimated density]
However, when the BatchNormalization bijector is added, sampling works as expected, but evaluating the probability seems wrong:
[heatmap: samples and estimated density]
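The density heatmaps are produced by evaluating prob on a regular grid, roughly as follows; the grid bounds and resolution here are illustrative.

import numpy as np

# Evaluate the learned density on a regular 2D grid (bounds and resolution are illustrative)
xs, ys = np.meshgrid(np.linspace(-2.0, 3.0, 200), np.linspace(-1.5, 2.0, 200))
grid = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(np.float32)   # shape [200 * 200, 2]

densities = flow.prob(grid)                           # these values look wrong with BatchNormalization
density_grid = densities.numpy().reshape(xs.shape)    # reshape back for plotting as a heatmap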
Because I am interested in density estimation, the prob method is crucial. The full code can be found in the following Jupyter notebook:
https://github.com/mmsbrggr/normalizing-flows/blob/master/moons_training_rnvp.ipynb
I know that the BatchNormalization bijector behaves differently during training and inference. Could the problem be that the BN bijector is still in training mode? If so, how can I move the flow to inference mode?
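Concretely, is something like the sketch below the intended way to switch to inference mode? This assumes that the training constructor argument of tfb.BatchNormalization is the relevant switch, that the bijector exposes its underlying Keras layer as .batchnorm, and that the construction loop above is changed to keep references to the individual bijectors (bn_bijectors, nvp_bijectors, and permutations are these hypothetical lists); I have not verified any of this.

# Hypothetical references kept from the construction loop above:
#   bn_bijectors  - the tfb.BatchNormalization instances used during training (training=True)
#   nvp_bijectors - the trained RealNVP coupling bijectors
#   permutations  - the tfp.bijectors.Permute instances
eval_bijectors = []
for bn, nvp, perm in zip(bn_bijectors, nvp_bijectors, permutations):
    # Re-wrap the *same* underlying Keras BatchNormalization layer, but with training=False,
    # so that the moving mean/variance accumulated during training are used.
    eval_bijectors.append(tfb.BatchNormalization(batchnorm_layer=bn.batchnorm, training=False))
    eval_bijectors.append(nvp)      # reuse the trained coupling layers
    eval_bijectors.append(perm)

eval_flow = tfd.TransformedDistribution(
    distribution=tfd.Sample(base_dist, sample_shape=[dimensions]),
    bijector=tfb.Chain(bijectors=list(reversed(eval_bijectors))[:-1], name='eval_chain')
)

density = eval_flow.prob(grid)    # density estimation with BatchNormalization in inference mode?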