I'm trying to make a Python script that removes unneeded words and punctuation from a txt file, but it isn't good enough for the grader?

Posted 2025-02-10 05:24:57 · 1,340 characters · 3 views · 0 comments


I'm trying to make a word cloud. I need to strip a txt file of uninteresting words and punctuation. The grader just isn't giving me any feedback. I think my script is removing some extra words, and I can't figure out why. Can someone point me in the right direction?

punctuations = '''!()-[]{};:'"\,<>./?@#$%^&*_~'''
uninteresting_words = ["the", "a", "to", "if", "is", "it", "of", "and", "or", "an", "as", "i", "me", "my", \
"we", "our", "ours", "you", "your", "yours", "he", "she", "him", "his", "her", "hers", "its", "they", "them", \
"their", "what", "which", "who", "whom", "this", "that", "am", "are", "was", "were", "be", "been", "being", \
"have", "has", "had", "do", "does", "did", "but", "at", "by", "with", "from", "here", "when", "where", "how", \
"all", "any", "both", "each", "few", "more", "some", "such", "no", "nor", "too", "very", "can", "will", "just"]

def count(file_contents):
    frequencies = {}
    word_list = file_contents.split()
    final_list = []
    #remove all uninteresting words
    for word in word_list:
    
        new_word = ""
        for character in word:
            if character not in punctuations and character.isalpha():
                new_word += character
            
        if word.lower() not in uninteresting_words:
            final_list.append(new_word)
        
    for word in final_list:
        if word not in frequencies:
            frequencies[word] = 0 
        frequencies[word] += 1
    return frequencies
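A minimal trace shows where the extra words come from: the filter checks the raw token `word` (which may still contain an apostrophe or other punctuation) rather than the cleaned `new_word`, so a token like "it's" is compared against the stop list as "it's", misses, and slips through as "Its". A stripped-down repro of the function above, with the course lists trimmed to a few entries for brevity and a hypothetical sample input:

```python
# trimmed versions of the course-provided lists, just enough to show the bug
punctuations = '''!()-[]{};:'"\\,<>./?@#$%^&*_~'''
uninteresting_words = ["a", "its", "is"]

def count(file_contents):
    frequencies = {}
    word_list = file_contents.split()
    final_list = []
    for word in word_list:
        new_word = ""
        for character in word:
            if character not in punctuations and character.isalpha():
                new_word += character
        # BUG: tests the raw token ("it's"), not the cleaned word ("its")
        if word.lower() not in uninteresting_words:
            final_list.append(new_word)
    for word in final_list:
        if word not in frequencies:
            frequencies[word] = 0
        frequencies[word] += 1
    return frequencies

print(count("It's a test"))  # {'Its': 1, 'test': 1} -- "Its" should have been filtered out
```

Checking `new_word.lower()` after cleaning would fix it; another route is to strip punctuation from the whole text before splitting, so the tokens are already clean when they hit the stop list.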

墟烟 answered on 2025-02-17 05:24:57


I don't know whether I was getting the answer marked wrong because I was uploading it incorrectly, or whether Coursera was bugged, because I submitted this code in two different ways. For one, I clicked Submit Assignment directly; for the other, I downloaded the notebook and submitted it through Coursera. That worked and gave me the correct answer. Regardless, this is correct code.

# Here is a list of punctuations and uninteresting words you can use to process your text
punctuations = '''!()-[]{};:'"\,<>./?@#$%^&*_~'''
uninteresting_words = ["the", "a", "to", "if", "is", "it", "of", "and", "or", "an", "in", "as", "i", "me", "my", \
"we", "our", "ours", "you", "your", "yours", "he", "she", "him", "his", "her", "hers", "its", "they", "them", \
"their", "what", "which", "who", "whom", "this", "that", "am", "are", "was", "were", "be", "been", "being", \
"have", "has", "had", "do", "does", "did", "but", "at", "by", "with", "from", "here", "when", "where", "how", \
"all", "any", "both", "each", "few", "more", "some", "such", "no", "nor", "too", "very", "can", "will", "just"]

# LEARNER CODE START HERE

# file_contents is supplied by the notebook before this cell runs
file_no_punct = ""

# remove all punctuation, keeping only letters and whitespace
for char in file_contents:
    if char.isalpha() or char.isspace():
        file_no_punct += char

boring_list = file_no_punct.split()
zesty_list = []

# remove all uninteresting words
for word in boring_list:
    if word.lower() not in uninteresting_words and word.isalpha():
        zesty_list.append(word)

# count the frequency of each remaining word
frequencies = {}
for word in zesty_list:
    if word not in frequencies:
        frequencies[word] = 0
    frequencies[word] += 1
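As a quick sanity check, the same pipeline can be run end to end on a small hypothetical sample string standing in for the notebook-provided `file_contents` (stop list trimmed for brevity):

```python
# trimmed stop list for the demo (a subset of the course-provided one)
uninteresting_words = ["the", "a", "to", "is", "it", "of", "and", "its", "very"]

# hypothetical sample text standing in for the notebook-provided file_contents
file_contents = "The cloud is very, very fluffy. It's a fluffy cloud!"

# remove all punctuation (keep letters and whitespace only)
file_no_punct = ""
for char in file_contents:
    if char.isalpha() or char.isspace():
        file_no_punct += char

# remove all uninteresting words
boring_list = file_no_punct.split()
zesty_list = []
for word in boring_list:
    if word.lower() not in uninteresting_words and word.isalpha():
        zesty_list.append(word)

# count the remaining words
frequencies = {}
for word in zesty_list:
    if word not in frequencies:
        frequencies[word] = 0
    frequencies[word] += 1

print(frequencies)  # {'cloud': 2, 'fluffy': 2}
```

Because punctuation is stripped before the split, "It's" becomes "Its" and is correctly caught by the stop list — which is exactly what the question's version got wrong.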