Applying cross-validation with Naive Bayes

Posted on 2025-01-29 16:38:32

My dataset consists of spam and ham Filipino messages.

I divided my dataset into 60% training, 20% testing, and 20% validation.

Split the data into training, testing, and validation sets

from sklearn.model_selection import train_test_split


data['label'] = (data['label'].replace({'ham'  : 0,
                                         'spam' : 1}))
X_train, X_test, y_train, y_test = train_test_split(data['message'],
                                                     data['label'],
                                                     test_size=0.2,
                                                     random_state=1)

X_train, X_val, y_train, y_val = train_test_split(X_train, y_train,
                                                  test_size=0.25,
                                                  random_state=1)  # 0.25 x 0.8 = 0.2
print('Total: {} rows'.format(data.shape[0]))
print('Train: {} rows'.format(X_train.shape[0]))
print(' Test: {} rows'.format(X_test.shape[0]))
print(' Validation: {} rows'.format(X_val.shape[0]))

Train a MultinomialNB from sklearn

from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
import numpy as np
# train_data / test_data are assumed to be the vectorized versions of
# X_train / X_test (the vectorization step is not shown in the question)
naive_bayes = MultinomialNB().fit(train_data, y_train)
predictions = naive_bayes.predict(test_data)

Evaluate the Model

from sklearn.metrics import (accuracy_score, 
                             precision_score,
                             recall_score, 
                             f1_score)
accuracy_score = accuracy_score(y_test,
                                predictions)
precision_score = precision_score(y_test,
                                  predictions)
recall_score = recall_score(y_test,
                            predictions)
f1_score = f1_score(y_test,
                    predictions)

My problem is in the validation step. The warning says:

warnings.warn("Estimator fit failed. The score on this train-test"

This is how I coded my validation; I don't know if I'm doing the right thing:

from sklearn.model_selection import cross_val_score

mnb = MultinomialNB()
scores = cross_val_score(mnb, X_val, y_val, cv=10, scoring='accuracy')

print('Cross-validation scores:{}'.format(scores))

Comments (2)

奶茶白久 2025-02-05 16:38:34

First, it is worth noting that just because it is called cross-validation does not mean you have to run it on a separate validation set, as you have done in your code. There are a number of reasons why you would perform cross-validation, including:

  • Ensuring that all of your data is used both for training and for evaluating the model's performance
  • Performing hyperparameter tuning.

Hence, your case here leans toward the first use case. As such, you don't need to first split into train, val, and test sets; instead, you can perform 10-fold cross-validation on your entire dataset, as in the sketch below.
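
Here is a minimal sketch of that approach (assuming data is the DataFrame from your question, with its message and label columns; the CountVectorizer step is an assumption, added because MultinomialNB needs numeric features rather than raw text):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Bundle vectorization and the classifier so that each fold
# fits the vectorizer only on that fold's training portion.
model = make_pipeline(CountVectorizer(), MultinomialNB())

# 10-fold cross-validation on the entire dataset.
scores = cross_val_score(model, data['message'], data['label'],
                         cv=10, scoring='accuracy')
print('Cross-validation scores: {}'.format(scores))
print('Mean accuracy: {:.3f}'.format(scores.mean()))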

If you are doing hyperparameter tuning, then you can have a hold-out set of, say, 30% and use the remaining 70% for cross-validation. Once the best parameters have been determined, you can then use the hold-out set to evaluate the model with the best parameters; a sketch of that workflow follows.
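
A minimal sketch of that second workflow, under the same assumptions as above (the alpha grid is only an illustrative choice, not a recommendation):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hold out 30% for the final evaluation; tune on the remaining 70%.
X_dev, X_hold, y_dev, y_hold = train_test_split(
    data['message'], data['label'], test_size=0.3, random_state=1)

pipe = make_pipeline(CountVectorizer(), MultinomialNB())
param_grid = {'multinomialnb__alpha': [0.1, 0.5, 1.0]}  # example grid only

# 10-fold cross-validation over the grid on the 70% development portion.
search = GridSearchCV(pipe, param_grid, cv=10, scoring='accuracy')
search.fit(X_dev, y_dev)

print('Best parameters: {}'.format(search.best_params_))
print('Hold-out accuracy: {:.3f}'.format(search.score(X_hold, y_hold)))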

Some refs:

https://towardsdatascience.com/5-reasons-why-you-should-use-cross-validation-in-your-data-science-project-8163311a1e79

https://www.analyticsvidhya.com/blog/2021/11/top-7-cross-validation-techniques-with-python-code/

https://towardsdatascience.com/train-test-split-and-cross-validation-in-python-80b61beca4b6

盛夏已如深秋 2025-02-05 16:38:33


I did not get any error or warning with the code below, so maybe it will work for you. The main difference is that the messages are turned into numeric count features with CountVectorizer before anything is fit, so MultinomialNB and cross_val_score never see raw text.

import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import (accuracy_score,
                             precision_score,
                             recall_score,
                             f1_score)
from sklearn.feature_extraction.text import CountVectorizer

df = pd.read_csv("https://raw.githubusercontent.com/jeffprosise/Machine-Learning/master/Data/ham-spam.csv")

# Turn the raw text into numeric count features.
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words='english')
x = vectorizer.fit_transform(df['Text'])
y = df['IsSpam']

# 60% train / 20% test / 20% validation
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1)  # 0.25 x 0.8 = 0.2

print('Total: {} rows'.format(df.shape[0]))
print('Train: {} rows'.format(X_train.shape[0]))
print(' Test: {} rows'.format(X_test.shape[0]))
print(' Validation: {} rows'.format(X_val.shape[0]))

# Fit on the training split and evaluate on the test split.
naive_bayes = MultinomialNB().fit(X_train, y_train)
predictions = naive_bayes.predict(X_test)

accuracy = accuracy_score(y_test, predictions)
precision = precision_score(y_test, predictions)
recall = recall_score(y_test, predictions)
f1 = f1_score(y_test, predictions)

# 10-fold cross-validation on the (already vectorized) validation split.
mnb = MultinomialNB()
scores = cross_val_score(mnb, X_val, y_val, cv=10, scoring='accuracy')
print('Cross-validation scores:{}'.format(scores))

Result:

Total: 1000 rows
Train: 600 rows
 Test: 200 rows
 Validation: 200 rows
Cross-validation scores:[1.   0.95 0.85 1.   1.   0.9  0.9  0.8  0.9  0.9 ]