N-gram IDF Smoothing

Posted 2024-09-05 13:04

I am trying to use IDF scores to find interesting phrases in my pretty huge corpus of documents.
I basically need something like Amazon's Statistically Improbable Phrases, i.e. phrases that distinguish a document from all the others.
The problem I am running into is that some (3,4)-grams in my data which have super-high IDF actually consist of component unigrams and bigrams which have really low IDF.
For example, "you've never tried" has a very high IDF, while each of its component unigrams has a very low IDF.
I need to come up with a function that can take in the document frequencies of an n-gram and all its component (n-k)-grams and return a more meaningful measure of how well the phrase distinguishes its parent document from the rest.
If I were dealing with probabilities, I would try interpolation or backoff models. I am not sure what assumptions/intuitions those models leverage to perform well, and so how well they would work for IDF scores.
Does anybody have any better ideas?

Comments (1)

墨落成白 2024-09-12 13:04:01

I take it that "you've never tried" is a phrase that you don't want to extract, but which has a high IDF. The problem is that a vast number of n-grams occur in only one document and so get the largest possible IDF score.
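To see why, plug a document frequency of one into the standard idf = log(N/df) formula: every such one-off n-gram hits the same ceiling, distinctive or not. A quick check (the corpus size is a made-up figure):

```python
import math

n_docs = 1_000_000
for df in (1, 10, 1_000, 100_000):
    print(df, round(math.log(n_docs / df), 2))
# df=1 prints the ceiling, log(1e6) ≈ 13.82, whether the phrase is
# genuinely distinctive or just a one-off string of everyday words.
```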

There are lots of smoothing techniques in NLP. This paper [Chen & Goodman] is a pretty good summary of many of them. In particular, you sound like you might be interested in the Kneser-Ney smoothing algorithm, which works in the way you suggest (backing off to shorter n-grams).
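As a very rough illustration of the backoff intuition carried over to IDF scores (this is not Kneser-Ney itself, whose discounting is specific to probability estimates; the min_df threshold here is a made-up knob):

```python
import math

def backoff_idf(df, component_dfs, n_docs, min_df=3):
    # If the full n-gram is seen in enough documents to trust,
    # use its own IDF.
    if df >= min_df:
        return math.log(n_docs / df)
    # Otherwise back off: let the most common component (n-1)-gram
    # cap the score, so "you've never tried" inherits a low score
    # from its everyday parts.
    return math.log(n_docs / max(component_dfs))
```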

These methods are usually used for the task of language modelling, i.e. to estimate the probability of an n-gram occurring given a really big corpus of the language. I don't really know how you might integrate them with IDF scores, or even if that's really what you want to do.
