Log probabilities in a Naive Bayes implementation for text classification

Posted on 2024-10-27 18:29:39


I am implementing the Naive Bayes algorithm for text classification. I have ~1000 documents for training and 400 documents for testing. I think I have implemented the training part correctly, but I am confused about the testing part. Here is, briefly, what I have done:

In my training function:

vocabularySize= GetUniqueTermsInCollection();//get all unique terms in the entire collection

spamModelArray[vocabularySize]; 
nonspamModelArray[vocabularySize];

for each training_file{
        class = GetClassLabel(); // 0 for spam or 1 = non-spam
        document = GetDocumentID();

        counterTotalTrainingDocs ++;

        if(class == 0){
                counterTotalSpamTrainingDocs++;
        }

        for each term in document{
                freq = GetTermFrequency; // how many times this term appears in this document?
                id = GetTermID; // unique id of the term 

                if(class == 0){ //SPAM
                        spamModelArray[id]+= freq;
                        totalNumberofSpamWords++; // total number of terms marked as spam in the training docs
                }else{ // NON-SPAM
                        nonspamModelArray[id]+= freq;
                        totalNumberofNonSpamWords++; // total number of terms marked as non-spam in the training docs
                }
        }//for each term
}//for each training_file

// After all training documents have been counted, turn the per-class term
// counts into estimates of P(term|class).
for i in vocabularySize{
        spamModelArray[i] = spamModelArray[i]/totalNumberofSpamWords;
        nonspamModelArray[i] = nonspamModelArray[i]/totalNumberofNonSpamWords;
}//for

priorProb = counterTotalSpamTrainingDocs/counterTotalTrainingDocs; // prior probability of the spam class

I think I have understood and implemented the training part correctly, but I am not sure I have implemented the testing part properly. Here, I go through each test document and calculate logP(spam|d) and logP(non-spam|d) for it. Then I compare these two quantities to determine the class (spam/non-spam).
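
Concretely, the per-class score I accumulate (with the class prior added at the end) is

logP(c|d) = logP(c) + sum over terms w in d of C(w,d) * logP(w|c) - logP(d)

where C(w,d) is the number of times term w occurs in document d, P(w|c) is the per-class term probability estimated during training, and the logP(d) term is the same for both classes, so it can be dropped when only comparing them.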

In my testing function:

vocabularySize = GetUniqueTermsInCollection(); // get all unique terms in the entire collection
for each testing_file{
        document = GetDocumentID();

        logProbabilityofSpam = 0;
        logProbabilityofNonSpam = 0;

        for each term in document{
                freq = GetTermFrequency; // how many times this term appears in this document?
                id = GetTermID; // unique id of the term 

                // logP(w1 w2 .. wn | class) = sum over j of C(wj) * logP(wj | class)
                logProbabilityofSpam+= freq*log(spamModelArray[id]);
                logProbabilityofNonSpam+= freq*log(nonspamModelArray[id]);
        }//for

        // Compare the two (unnormalized) log posteriors to pick the class:
        // argmax over classes ck of [ logP(d|ck) + logP(ck) ]
        if (logProbabilityofNonSpam + log(1-priorProb) > logProbabilityofSpam + log(priorProb)) {
                newclass = 1; //not spam
        }else{
                newclass = 0; // spam
        }

}//for

My problem is this: I want to return the probability of each class instead of hard 1s and 0s (spam/non-spam). I want to see, e.g., newclass = 0.8684212, so that I can apply a threshold later on. But I am confused here. How can I calculate the probability for each document? Can I use the log probabilities to calculate it?


Comments (1)

弥枳 2024-11-03 18:29:39


The probability of the data described by a set of features {F1, F2, ..., Fn} belonging to class C is, according to the naïve Bayes probability model,

P(C|F) = P(C) * (P(F1|C) * P(F2|C) * ... * P(Fn|C)) / P(F1, ..., Fn)

You have all the terms (in logarithmic form), except for the 1 / P(F1, ..., Fn) term, since that is not used in the naïve Bayes classifier you are implementing. (Strictly speaking, the MAP classifier.)

You'd have to collect frequencies of the features as well, and from them calculate

P(F1, ..., Fn) = P(F1) * ... * P(Fn)
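
Since there are only two classes here, the denominator P(F1, ..., Fn) can also be obtained, under the model, by summing the two class-conditional joint scores over the classes (law of total probability), instead of estimating it from separate unconditional feature frequencies. Below is a minimal sketch of that route, assuming logProbabilityofSpam, logProbabilityofNonSpam and priorProb come from the pseudocode in the question; the helper name posteriorSpam is used only for illustration.

#include <algorithm>
#include <cmath>

// Turn the two per-document log scores into P(spam | d) in [0, 1]:
//   logJointSpam    = logProbabilityofSpam    + log(priorProb)
//   logJointNonSpam = logProbabilityofNonSpam + log(1 - priorProb)
// Subtracting the maximum before exponentiating (the log-sum-exp trick)
// keeps exp() from underflowing on the large negative log probabilities
// that long documents produce.
double posteriorSpam(double logJointSpam, double logJointNonSpam) {
        double m = std::max(logJointSpam, logJointNonSpam);
        double expSpam    = std::exp(logJointSpam - m);
        double expNonSpam = std::exp(logJointNonSpam - m);
        return expSpam / (expSpam + expNonSpam); // = P(spam | d)
}

Thresholding the returned value at 0.5 reproduces the original argmax comparison; any other threshold can then be applied to it afterwards, as asked in the question.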