NaiveBayesClassifier "too many values to unpack" error
I am trying to build a sentiment analysis model to examine some news articles and I am a bit stumped building my model. I am not quite sure what else I need to do to structure my dataset aside from making it into a dictionary.
The dataset I am using is from this thread:
https://forum.knime.com/t/mpqa-corpus/7887/2
import nltk
from Noise_Removal import lemmatize_sentence, remove_noise
from Single_Article_Scrape import scrape_news
import pandas as pd
positive_MPQA = pd.read_csv("C:/Users/.../Model_Data/MPQA-OpinionCorpus-PositiveList.csv")
negative_MPQA = pd.read_csv("C:/Users/.../Model_Data/MPQA-OpinionCorpus-NegativeList.csv")
positive_MPQA['Sentiment'] = 'Positive'
negative_MPQA['Sentiment'] = 'Negative'
positive_tokens = positive_MPQA.values.tolist()
negative_tokens = negative_MPQA.values.tolist()
positive_data = dict(positive_tokens)
negative_data = dict(negative_tokens)
dataset = positive_data | negative_data
import random
keys = list(dataset.keys())
random.shuffle(keys)
ShuffledDataset = dict()
for key in keys:
ShuffledDataset.update({key: dataset[key]})
from nltk import classify
from nltk import NaiveBayesClassifier
classifier = NaiveBayesClassifier.train(dataset)
1 Answer

The method NaiveBayesClassifier.train() expects a list of (featureset, label) tuples that it can iterate over. When it tries to iterate over the dictionary you passed, it ends up (in effect) iterating over the list of keys, and unpacking each key as a tuple raises the "too many values to unpack" error. This is the correct way to call it:
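The code that followed is missing from this copy of the answer. As a minimal sketch of what it likely showed (assuming, as in the question, that `dataset` maps each word to its sentiment label), the dictionary is converted into the list of `(featureset, label)` tuples that `train()` requires, where each featureset is a dict of feature names to values:

```python
from nltk import NaiveBayesClassifier

# A small stand-in for the word -> sentiment dictionary built in the question.
dataset = {"good": "Positive", "great": "Positive",
           "bad": "Negative", "awful": "Negative"}

# train() wants an iterable of (featureset, label) pairs, where each
# featureset is a dict mapping feature names to feature values.
labeled_features = [({"word": word}, sentiment)
                    for word, sentiment in dataset.items()]

classifier = NaiveBayesClassifier.train(labeled_features)

# New tokens are classified by building the same kind of featureset.
print(classifier.classify({"word": "good"}))   # Positive
print(classifier.classify({"word": "awful"}))  # Negative
```

Using a single word as the only feature is just the smallest structure that satisfies `train()`; for real articles you would typically build a richer featureset per document (e.g. one boolean feature per token) before labeling it.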