I'm trying to make a Python script that removes unnecessary words and punctuation from a txt file, but it's not good enough for the grader?
I'm trying to make a word cloud. I need to strip a txt file of uninteresting words and punctuation. The grader just isn't giving me any feedback. I think my script is letting some extra words through, and I can't figure out why. Can someone point me in the right direction?
punctuations = '''!()-[]{};:'"\,<>./?@#$%^&*_~'''
uninteresting_words = ["the", "a", "to", "if", "is", "it", "of", "and", "or", "an", "as", "i", "me", "my",
    "we", "our", "ours", "you", "your", "yours", "he", "she", "him", "his", "her", "hers", "its", "they", "them",
    "their", "what", "which", "who", "whom", "this", "that", "am", "are", "was", "were", "be", "been", "being",
    "have", "has", "had", "do", "does", "did", "but", "at", "by", "with", "from", "here", "when", "where", "how",
    "all", "any", "both", "each", "few", "more", "some", "such", "no", "nor", "too", "very", "can", "will", "just"]

def count(file_contents):
    frequencies = {}
    word_list = file_contents.split()
    final_list = []
    # remove all uninteresting words
    for word in word_list:
        new_word = ""
        for character in word:
            if character not in punctuations and character.isalpha():
                new_word += character
        if word.lower() not in uninteresting_words:
            final_list.append(new_word)
    for word in final_list:
        if word not in frequencies:
            frequencies[word] = 0
        frequencies[word] += 1
    return frequencies
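The stop-word check above tests the raw token (`word.lower()`) rather than the cleaned `new_word`, so a stop word followed by punctuation slips past the filter. A minimal demonstration of just that membership test, with an abridged stop list:

```python
# Abridged stop list, for illustration only; the question uses the full list.
uninteresting_words = ["the", "a", "to", "if", "is", "it", "of", "and"]

kept = {}
for token in ["and", "and,", "The", "the."]:
    # Same membership test the script performs: the raw token, punctuation and all.
    kept[token] = token.lower() not in uninteresting_words

print(kept)  # "and," and "the." survive because the comma/period spoils the match
```

Running this shows `"and"` and `"The"` are correctly filtered, while `"and,"` and `"the."` are kept, which matches the extra words the script produces.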
Comments (1)
I don't know if I was getting the answer marked wrong because I was uploading it incorrectly, or if Coursera was bugged, because I submitted this code in two different ways. For one, I clicked Submit Assignment directly; for the other, I downloaded the notebook and submitted it through Coursera. That worked and gave me the correct answer. Regardless, this is the correct code.
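The code this answer refers to didn't survive in the page. A plausible corrected version, assuming the fix is to test the cleaned `new_word` (and skip tokens that were pure punctuation) instead of the raw token, is sketched below; the stop list is abridged here for brevity:

```python
punctuations = '''!()-[]{};:'"\\,<>./?@#$%^&*_~'''
# Abridged stop list for the sketch; the question shows the full list.
uninteresting_words = ["the", "a", "to", "if", "is", "it", "of", "and", "or",
                       "an", "as", "i", "me", "my", "this", "that", "was"]

def count(file_contents):
    frequencies = {}
    final_list = []
    for word in file_contents.split():
        new_word = ""
        for character in word:
            if character not in punctuations and character.isalpha():
                new_word += character
        # Key fix: check the CLEANED word, and drop tokens that were all punctuation.
        if new_word and new_word.lower() not in uninteresting_words:
            final_list.append(new_word)
    for word in final_list:
        frequencies[word] = frequencies.get(word, 0) + 1
    return frequencies

print(count("The cat sat, and the cat slept."))  # {'cat': 2, 'sat': 1, 'slept': 1}
```

With this change, `"and,"` is cleaned to `"and"` before the stop-word check, so it is filtered as intended.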