How can I generate points that match a histogram?

Published 2024-07-11 14:08:15


I am working on a simulation system. I will soon have experimental data (histograms) for the real-world distribution of values for several simulation inputs.

When the simulation runs, I would like to be able to produce random values that match the measured distribution. I'd prefer to do this without storing the original histograms. What are some good ways of

  1. Mapping a histogram to a set of parameters representing the distribution?
  2. Generating values based on those parameters at runtime?

EDIT: The input data are event durations for several different types of events. I expect that different types will have different distribution functions.


对风讲故事 2024-07-18 14:08:15


At least two options:

  1. Integrate the histogram and invert numerically.
  2. Rejection

Numeric integration

From Computation in Modern Physics by William R. Gibbs:

One can always numerically integrate [the function] and invert the [cdf]
but this is often not very satisfactory especially if the pdf is changing
rapidly.

You literally build up a table that translates the range [0-1) into appropriate ranges in the target distribution. Then throw your usual (high quality) PRNG and translate with the table. It is cumbersome, but clear, workable, and completely general.
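A minimal sketch of that table in C, under assumptions not in the answer (the `inv_table_t` type, the linear interpolation inside each bin, and the example bin layout are all illustrative):

```c
#include <stdlib.h>

/* Accumulate histogram counts into a CDF table, then invert it:
   map u in [0,1) to the bin whose CDF interval contains u and
   interpolate linearly inside that bin. */
typedef struct {
    double *cdf;   /* cdf[i] = cumulative probability through bin i */
    double *edge;  /* n+1 bin edges */
    int n;
} inv_table_t;

inv_table_t inv_build(const double *counts, const double *edges, int n) {
    inv_table_t t;
    t.cdf  = malloc(n * sizeof(double));
    t.edge = malloc((n + 1) * sizeof(double));
    t.n = n;
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += counts[i];
    double acc = 0.0;
    for (int i = 0; i < n; i++) {
        acc += counts[i] / sum;
        t.cdf[i] = acc;
    }
    for (int i = 0; i <= n; i++) t.edge[i] = edges[i];
    return t;
}

double inv_sample(const inv_table_t *t, double u) {
    int i = 0;
    while (i < t->n - 1 && u >= t->cdf[i]) i++;   /* linear scan; use bsearch for many bins */
    double lo = (i == 0) ? 0.0 : t->cdf[i - 1];
    double frac = (u - lo) / (t->cdf[i] - lo);    /* position within the bin */
    return t->edge[i] + frac * (t->edge[i + 1] - t->edge[i]);
}
```

Feed `inv_sample` the output of your usual PRNG scaled to [0,1) and it returns a value distributed like the histogram.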

Rejection:

Normalize the target histogram, then

  1. Throw the dice to choose a position (x) along the range randomly.
  2. Throw again, and select this point if the new random number is less than the normalized histogram in this bin. Otherwise goto (1).

Again, simple-minded but clear and working. It can be slow for distributions with a lot of very low probability (peaks with long tails).
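The two-step loop above can be sketched in C like this (the `rand()`-based uniforms are stand-ins for whatever high-quality PRNG you prefer, and the function returns a bin index rather than a value):

```c
#include <stdlib.h>

/* Rejection sampling from a histogram whose bin heights have been
   normalized so the tallest bin is 1.0. */
int reject_sample(const double *norm_hist, int nbins) {
    for (;;) {
        int x = rand() % nbins;                           /* step 1: random position */
        double y = (double)rand() / ((double)RAND_MAX + 1.0);
        if (y < norm_hist[x]) return x;                   /* step 2: accept, else retry */
    }
}
```

The expected number of iterations is the ratio of the bounding box area to the area under the histogram, which is what makes long low tails slow.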


With both of these methods, you can approximate the data with piecewise polynomial fits or splines to generate a smooth curve if a step-function histogram is not desired, but leave that for later as it may be premature optimization.


Better methods may exist for special cases.

All of this is pretty standard and should appear in any numerical analysis textbook if more detail is needed.

著墨染雨君画夕 2024-07-18 14:08:15


More information about the problem would be useful. For example, what sort of values are the histograms over? Are they categorical (e.g., colours, letters) or continuous (e.g., heights, time)?

If the histograms are over categorical data I think it may be difficult to parameterise the distributions unless there are many correlations between categories.

If the histograms are over continuous data you might try to fit the distribution using a mixture of Gaussians. That is, try to fit the histogram with $\sum_{i=1}^n w_i N(m_i, v_i)$, where $m_i$ and $v_i$ are the component means and variances. Then, when you want to generate data, you first sample an $i$ from $1..n$ with probability proportional to the weights $w_i$, and then sample $x \sim N(m_i, v_i)$ as you would from any Gaussian.

Either way, you may want to read more about mixture models.

永言不败 2024-07-18 14:08:15


So it seems that what I want in order to generate a given probability distribution is a quantile function, which is the inverse of the
cumulative distribution function, as @dmckee says.

The question becomes: What is the best way to generate and store a quantile function describing a given continuous histogram? I have a feeling the answer will depend greatly on the shape of the input - if it follows any kind of pattern there should be simplifications over the most general case. I'll update here as I go.


Edit:

I had a conversation this week that reminded me of this problem. If I forgo describing the histogram as an equation, and just store the table, can I do selections in O(1) time? It turns out you can, without any loss of precision, at the cost of O(N lgN) construction time.

Create an array of N items. A uniform random selection into the array will find an item with probability 1/N. For each item, store the fraction of hits for which this item should actually be selected, and the index of another item which will be selected if this one is not.

Weighted Random Sampling, C implementation:

//data structure
typedef struct wrs_data {
  double share; 
  int pair;
  int idx;
} wrs_t;


//sort helper
int wrs_sharecmp(const void* a, const void* b) {
  double delta = ((wrs_t*)a)->share - ((wrs_t*)b)->share;
  return (delta<0) ? -1 : (delta>0);
}


//Initialize the data structure
wrs_t* wrs_create(int* weights, size_t N) {
  wrs_t* data = malloc(N * sizeof(wrs_t));  /* one slot per bucket */
  double sum = 0;
  int i;
  for (i=0;i<N;i++) { sum+=weights[i]; }
  for (i=0;i<N;i++) {
    //what percent of the ideal distribution is in this bucket?
    data[i].share = weights[i]/(sum/N); 
    data[i].pair = i;  /* default to itself; overwritten below for small buckets */
    data[i].idx = i;
  }
  //sort ascending by size
  qsort(data,N, sizeof(wrs_t),wrs_sharecmp);

  int j=N-1; //the biggest bucket
  for (i=0;i<j;i++) {
    int check = i;
    double excess = 1.0 - data[check].share;
    while (excess>0 && i<j) {
      //If this bucket has less samples than a flat distribution,
      //it will be hit more frequently than it should be.  
      //So send excess hits to a bucket which has too many samples.
      data[check].pair=j; 
      // Account for the fact that the paired bucket will be hit more often,
      data[j].share -= excess;  
      excess = 1.0 - data[j].share;
      // If paired bucket now has excess hits, send to new largest bucket at j-1
      if (excess >= 0) { check=j--;} 
    }
  }
  return data;
}


int wrs_pick(wrs_t* collection, size_t N)
//O(1) weighted random sampling (after preparing the collection).
//Randomly select a bucket, and a percentage.
//If the percentage is greater than that bucket's share of hits, 
// use its paired bucket.
{
  int idx = rand_in_range(0,N);
  double pct = rand_percent();
  if (pct > collection[idx].share) { idx = collection[idx].pair; }
  return collection[idx].idx;
} 
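The listing relies on two uniform helpers, `rand_in_range` and `rand_percent`, that aren't shown. A minimal sketch of what they might look like on top of the C standard library (a real simulation would likely swap in a better PRNG than `rand()`):

```c
#include <stdlib.h>

/* Uniform integer in [lo, hi) -- the bucket-index draw wrs_pick expects. */
int rand_in_range(int lo, int hi) {
    return lo + (int)(((double)rand() / ((double)RAND_MAX + 1.0)) * (hi - lo));
}

/* Uniform double in [0, 1). */
double rand_percent(void) {
    return (double)rand() / ((double)RAND_MAX + 1.0);
}
```

Both divide by `RAND_MAX + 1.0` so the result is strictly below the upper bound, which matters for `rand_in_range` staying inside the array.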

Edit 2:
After a little research, I found it's even possible to do the construction in O(N) time. With careful tracking, you don't need to sort the array to find the large and small bins. Updated implementation here

寻梦旅人 2024-07-18 14:08:15


If you need to pull a large number of samples with a weighted distribution of discrete points, then look at an answer to a similar question.

However, if you need to approximate some continuous random function using a histogram, then your best bet is probably dmckee's numeric integration answer. Alternatively, you can use aliasing: store the point to the left and pick a uniform number between the two points.

苄①跕圉湢 2024-07-18 14:08:15


To choose from a histogram (original or reduced),
Walker's alias method
is fast and simple.
