Finding 2 & 3 word phrases using the R tm package

Posted on 2024-12-27 12:08:01

I am trying to find code that actually works to find the most frequently used two- and three-word phrases with the R text mining package tm (maybe there is another package for it that I do not know about). I have been trying to use the tokenizer, but I seem to have no luck.

If you have worked on a similar situation in the past, could you post code that is tested and actually works? Thank you so much!

Comments (7)

坦然微笑 2025-01-03 12:08:01

You can pass in a custom tokenizing function to tm's DocumentTermMatrix function, so if you have package tau installed it's fairly straightforward.

library(tm)
library(tau)

# tau::textcnt() counts n-grams; the row names of its counts are the n-gram tokens
tokenize_ngrams <- function(x, n = 3)
  rownames(as.data.frame(unclass(textcnt(x, method = "string", n = n))))

texts <- c("This is the first document.", "This is the second file.", "This is the third text.")
corpus <- Corpus(VectorSource(texts))
matrix <- DocumentTermMatrix(corpus, control = list(tokenize = tokenize_ngrams))

Where n in the tokenize_ngrams function is the number of words per phrase. This feature is also implemented in package RTextTools, which further simplifies things.

library(RTextTools)
texts <- c("This is the first document.", "This is the second file.", "This is the third text.")
matrix <- create_matrix(texts,ngramLength=3)

This returns an object of class DocumentTermMatrix for use with package tm.
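
To actually pull out the most frequent phrases, a minimal follow-up sketch (my addition, assuming the DocumentTermMatrix stored in matrix above) sums the counts over all documents and sorts them:

freq <- sort(colSums(as.matrix(matrix)), decreasing = TRUE)  # total count per phrase
head(freq, 10)                                               # the ten most frequent phrases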

烟火散人牵绊 2025-01-03 12:08:01

This is entry 5 of the tm package FAQ:

5. Can I use bigrams instead of single tokens in a term-document matrix?

Yes. RWeka provides a tokenizer for arbitrary n-grams which can be
directly passed on to the term-document matrix constructor. E.g.:

  library("RWeka")
  library("tm")

  data("crude")

  BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
  tdm <- TermDocumentMatrix(crude, control = list(tokenize = BigramTokenizer))

  inspect(tdm[340:345,1:10])
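
To rank the bigrams by overall frequency, a minimal sketch (my addition, assuming the tdm object built above) is:

  freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)  # terms are rows in a TermDocumentMatrix
  head(freq, 10)
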
束缚m 2025-01-03 12:08:01

This is my own creation, written for different purposes, but I think it may be applicable to your needs too:

#User Defined Functions
Trim <- function (x) gsub("^\\s+|\\s+$", "", x)

breaker <- function(x) unlist(strsplit(x, "[[:space:]]|(?=[.!?*-])", perl=TRUE))

strip <- function(x, digit.remove = TRUE, apostrophe.remove = FALSE){
    strp <- function(x, digit.remove, apostrophe.remove){
        x2 <- Trim(tolower(gsub(".*?($|'|[^[:punct:]]).*?", "\\1", as.character(x))))
        x2 <- if(apostrophe.remove) gsub("'", "", x2) else x2
        ifelse(digit.remove==TRUE, gsub("[[:digit:]]", "", x2), x2)
    }
unlist(lapply(x, function(x) Trim(strp(x =x, digit.remove = digit.remove, 
    apostrophe.remove = apostrophe.remove)) ))
}

unblanker <- function(x)subset(x, nchar(x)>0)

#Fake Text Data
x <- "I like green eggs and ham.  They are delicious.  They taste so yummy.  I'm talking about ham and eggs of course"

#The code using Base R to Do what you want
breaker(x)
strip(x)
words <- unblanker(breaker(strip(x)))
textDF <- as.data.frame(table(words))
textDF$characters <- sapply(as.character(textDF$words), nchar)
textDF2 <- textDF[order(-textDF$characters, textDF$Freq), ]
rownames(textDF2) <- 1:nrow(textDF2)
textDF2
subset(textDF2, characters%in%2:3)
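
The code above counts single words; a hedged extension (my addition, not part of the original answer) pastes consecutive cleaned words together to count two-word phrases in base R:

# pair each word with the one that follows it, then tabulate
# (this simple version ignores sentence boundaries)
bigrams <- paste(head(words, -1), tail(words, -1))
head(sort(table(bigrams), decreasing = TRUE))
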
毁我热情 2025-01-03 12:08:01

The corpus library has a function called term_stats that does what you want:

library(corpus)
corpus <- gutenberg_corpus(55) # Project Gutenberg #55, _The Wizard of Oz_
text_filter(corpus)$drop_punct <- TRUE # ignore punctuation
term_stats(corpus, ngrams = 2:3)
##    term             count support
## 1  of the             336       1
## 2  the scarecrow      208       1
## 3  to the             185       1
## 4  and the            166       1
## 5  said the           152       1
## 6  in the             147       1
## 7  the lion           141       1
## 8  the tin            123       1
## 9  the tin woodman    114       1
## 10 tin woodman        114       1
## 11 i am                84       1
## 12 it was              69       1
## 13 in a                64       1
## 14 the great           63       1
## 15 the wicked          61       1
## 16 wicked witch        60       1
## 17 at the              59       1
## 18 the little          59       1
## 19 the wicked witch    58       1
## 20 back to             57       1
## ⋮  (52511 rows total)

Here, count is the number of appearances, and support is the number of documents containing the term.
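
If you only want the three-word phrases, a small follow-up sketch (my addition, reusing the same corpus object and treating the term_stats result as a data frame, as shown above) restricts ngrams and filters on count:

stats <- term_stats(corpus, ngrams = 3)   # three-word phrases only
head(subset(stats, count >= 50), 10)      # 50 is an arbitrary frequency cut-off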

瘫痪情歌 2025-01-03 12:08:01

I hit a similar problem using the tm and ngram packages.
While debugging mclapply, I saw that documents with fewer than 2 words caused problems, with the following error:

   input 'x' has nwords=1 and n=2; must have nwords >= n

So I added a filter to remove documents with a low word count:

    myCorpus.3 <- tm_filter(myCorpus.2, function (x) {
      length(unlist(strsplit(stringr::str_trim(x$content), '[[:blank:]]+'))) > 1
    })

Then my tokenize function looks like:

bigramTokenizer <- function(x) {
  x <- as.character(x)

  # Find words
  one.list <- c()
  tryCatch({
    one.gram <- ngram::ngram(x, n = 1)
    one.list <- ngram::get.ngrams(one.gram)
  }, 
  error = function(cond) { warning(cond) })

  # Find 2-grams
  two.list <- c()
  tryCatch({
    two.gram <- ngram::ngram(x, n = 2)
    two.list <- ngram::get.ngrams(two.gram)
  },
  error = function(cond) { warning(cond) })

  res <- unlist(c(one.list, two.list))
  res[res != '']
}

Then you can test the function with:

dtmTest <- lapply(myCorpus.3, bigramTokenizer)

And finally:

dtm <- DocumentTermMatrix(myCorpus.3, control = list(tokenize = bigramTokenizer))
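
To list the most common phrases from the resulting matrix, one option (a sketch using tm's findFreqTerms; the threshold of 5 is arbitrary) is:

# phrases appearing at least 5 times across all documents
findFreqTerms(dtm, lowfreq = 5)
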
会傲 2025-01-03 12:08:01

Try the tidytext package:

library(dplyr)
library(tidytext)
library(janeaustenr)
library(tidyr)

Suppose I have a dataframe CommentData that contains a Comment column, and I want to find occurrences of two words together. Then try:

bigram_filtered <- CommentData %>%
  unnest_tokens(bigram, Comment, token= "ngrams", n=2) %>%
  separate(bigram, c("word1","word2"), sep=" ") %>%
  filter(!word1 %in% stop_words$word,
         !word2 %in% stop_words$word) %>%
  count(word1, word2, sort=TRUE)

The above code creates tokens and then removes stop words that don't help the analysis (e.g. the, an, to). Then you count the occurrences of these word pairs. Finally, use the unite function to combine the individual words and record their occurrences.

bigrams_united <- bigram_filtered %>%
  unite(bigram, word1, word2, sep=" ")
bigrams_united
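
The same idea extends to three-word phrases; a hedged sketch (assuming the same CommentData data frame and Comment column) would be:

trigram_filtered <- CommentData %>%
  unnest_tokens(trigram, Comment, token = "ngrams", n = 3) %>%
  separate(trigram, c("word1", "word2", "word3"), sep = " ") %>%
  filter(!word1 %in% stop_words$word,
         !word2 %in% stop_words$word,
         !word3 %in% stop_words$word) %>%
  count(word1, word2, word3, sort = TRUE)
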
烂柯人 2025-01-03 12:08:01

Try this code.

library(tm)
library(SnowballC)
library(class)
library(wordcloud)

keywords <- read.csv(file.choose(), header = TRUE, na.strings=c("NA","-","?"))
keywords_doc <- Corpus(VectorSource(keywords$"use your column that you need"))
keywords_doc <- tm_map(keywords_doc, removeNumbers)
keywords_doc <- tm_map(keywords_doc, tolower)
keywords_doc <- tm_map(keywords_doc, stripWhitespace)
keywords_doc <- tm_map(keywords_doc, removePunctuation)
keywords_doc <- tm_map(keywords_doc, PlainTextDocument)
keywords_doc <- tm_map(keywords_doc, stemDocument)

This is the bigram (or trigram) section that you could use:

BigramTokenizer <-  function(x)
unlist(lapply(ngrams(words(x), 2), paste, collapse = " "), use.names = FALSE)
# create the term-document matrix of bigrams
keywords_matrix <- TermDocumentMatrix(keywords_doc, control = list(tokenize = BigramTokenizer))

# remove sparse terms 
keywords_naremoval <- removeSparseTerms(keywords_matrix, 0.95)

# Frequency of the words appearing
keyword.freq <- rowSums(as.matrix(keywords_naremoval))
subsetkeyword.freq <-subset(keyword.freq, keyword.freq >=20)
frequentKeywordSubsetDF <- data.frame(term = names(subsetkeyword.freq), freq = subsetkeyword.freq) 

# Sorting of the words
frequentKeywordDF <- data.frame(term = names(keyword.freq), freq = keyword.freq)
frequentKeywordSubsetDF <- frequentKeywordSubsetDF[with(frequentKeywordSubsetDF, order(-frequentKeywordSubsetDF$freq)), ]
frequentKeywordDF <- frequentKeywordDF[with(frequentKeywordDF, order(-frequentKeywordDF$freq)), ]

# Printing of the words
wordcloud(frequentKeywordDF$term, freq=frequentKeywordDF$freq, random.order = FALSE, rot.per=0.35, scale=c(5,0.5), min.freq = 30, colors = brewer.pal(8,"Dark2"))

Hope this helps. This is a complete script that you could use.
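
The tokenizer above only builds two-word phrases; a hedged variant (my addition) for three-word phrases simply changes the n-gram length and is passed to TermDocumentMatrix in the same way:

TrigramTokenizer <- function(x)
  unlist(lapply(ngrams(words(x), 3), paste, collapse = " "), use.names = FALSE)
# keywords_matrix <- TermDocumentMatrix(keywords_doc, control = list(tokenize = TrigramTokenizer))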
