Simple filtering of common words out of a text description

Published 2024-10-11 19:53:34 · 177 words · 10 views · 0 comments

Words like "a", "the", "best", "kind". I am pretty sure there are good ways of achieving this.

Just to be clear, I am looking for:

  1. The simplest solution that can be implemented, preferably in Ruby.
  2. I have a high tolerance for errors.
  3. If a library of common phrases is what I need, I'm perfectly happy with that too.

Comments (5)

遗心遗梦遗幸福 2024-10-18 19:53:34

These common words are known as "stop words" - there is a similar Stack Overflow question about this here: "Stop words" list for English?

To summarize:

  • If you have a large amount of text to deal with, it would be worth gathering statistics about the frequency of words in that particular data set, and taking the most frequent words for your stop word list. (That you include "kind" in your examples suggests to me that you might have quite an unusual set of data, e.g. with lots of colloquial expressions like "kind of", so perhaps you would need to do this.)
  • Since you say you don't mind much about errors, then it may be sufficient to just use a list of stop words for English that someone else has produced, e.g. the fairly long one used by MySQL or anything else that Google turns up.

If you just put these words into a hash in your program it should be easy to filter any list of words.
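As an illustration, a minimal sketch of that hash-based approach (the stop-word list here is a tiny sample made up for the example, not a full list such as MySQL's):

```ruby
# Build a hash for constant-time stop-word lookups, then filter a word list.
# This stop-word list is a small illustrative sample, not a complete one.
STOP_WORDS = {}
%w[a an the best kind of and or to is in be].each { |w| STOP_WORDS[w] = true }

def filter_stop_words(words)
  words.reject { |w| STOP_WORDS[w.downcase] }
end

p filter_stop_words(%w[the best kind of Ruby solution])
# => ["Ruby", "solution"]
```

Downcasing before the lookup keeps the filter case-insensitive, which matches the question's high tolerance for errors.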

蓝梦月影 2024-10-18 19:53:34

This is a variation on DigitalRoss's answer.

str=<<EOF
To be, or not to be: that is the question: 
  Whether 'tis nobler in the mind to suffer
  The slings and arrows of outrageous fortune,
  Or to take arms against a sea of troubles,
  And by opposing end them? To die: to sleep;
  No more; and by a sleep to say we end
  The heart-ache and the thousand natural shocks
  That flesh is heir to, 'tis a consummation
  Devoutly to be wish'd. To die, to sleep;
  To sleep: perchance to dream: ay, there's the rub;
  For in that sleep of death what dreams may come
EOF

common = {}
%w{ a and or to the is in be }.each{|w| common[w] = true}
puts str.gsub(/\b\w+\b/){|word| common[word.downcase] ? '': word}.squeeze(' ')

Also relevant:
What's the fastest way to check if a word from one string is in another string?

用心笑 2024-10-18 19:53:34
  Common = %w{ a and or to the is in be }
Uncommon = %{
  To be, or not to be: that is the question: 
  Whether 'tis nobler in the mind to suffer
  The slings and arrows of outrageous fortune,
  Or to take arms against a sea of troubles,
  And by opposing end them? To die: to sleep;
  No more; and by a sleep to say we end
  The heart-ache and the thousand natural shocks
  That flesh is heir to, 'tis a consummation
  Devoutly to be wish'd. To die, to sleep;
  To sleep: perchance to dream: ay, there's the rub;
  For in that sleep of death what dreams may come
}.split /\b/
ignore_me, result = {}, []
  Common.each { |w| ignore_me[w.downcase] = :Common          }
Uncommon.each { |w| result << w unless ignore_me[w.downcase[/\w*/]] }
puts result.join


 ,  not  : that   question: 
Whether 'tis nobler   mind  suffer
 slings  arrows of outrageous fortune,
  take arms against  sea of troubles,
 by opposing end them?  die:  sleep;
No more;  by  sleep  say we end
 heart-ache   thousand natural shocks
That flesh  heir , 'tis  consummation
Devoutly   wish'd.  die,  sleep;
 sleep: perchance  dream: ay, there's  rub;
For  that sleep of death what dreams may come
年少掌心 2024-10-18 19:53:34

Hold on, you need to do some research before you take out stopwords (aka noise words, junk words). Index size and processing resources aren't the only issues. A lot depends on whether end-users will be typing queries, or you will be working with long automated queries.

All search log analysis shows that people tend to type one to three words per query. When that's all a search has to work with, we can't afford to lose anything. For example, a collection might have the word "copyright" on many documents, making it very common, but if that word is not in the index, it's impossible to do exact phrase searches or proximity relevance ranking. In addition, there are perfectly legitimate reasons to search for the most common words: people may be looking for "The Who", or worse, "The The".

So while there are technical issues to consider, and taking out stopwords is one solution, it may not be the right solution for the overall problem that you are trying to solve.

(り薆情海 2024-10-18 19:53:34

If you have an array of words to remove named stop_words, then you can get the result from this expression:

description.scan(/\w+/).reject do |word|
  stop_words.include? word
end.join ' '

If you want to preserve the non-word characters between each word:

description.scan(/(\w+)(\W+)/).reject do |(word, other)|
  stop_words.include? word
end.flatten.join
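A quick usage sketch of both expressions, with hypothetical sample values for description and stop_words. Note that the second form only captures a word together with its trailing non-word characters, so a final word with no punctuation after it would be dropped; the sample text here therefore ends with a period.

```ruby
# Hypothetical sample inputs to exercise the two expressions above.
stop_words  = %w[a the best kind of]
description = "a book of the best kind, truly."

# Variant 1: keep only word runs, rejoin with single spaces (punctuation is lost).
filtered = description.scan(/\w+/).reject { |word| stop_words.include? word }.join ' '
puts filtered  # => "book truly"

# Variant 2: capture each word with its trailing separator to preserve punctuation.
kept = description.scan(/(\w+)(\W+)/).reject { |(word, _)| stop_words.include? word }.flatten.join
puts kept      # => "book truly."
```

As written, the include? check is case-sensitive ("A" would not match "a"); downcasing each word before the check would make it case-insensitive.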