Ignoring duplicate words in a Python dictionary

Posted 2024-10-28 00:19:35

I have a Python script that takes in '.html' files, removes stop words, and returns all the other words in a Python dictionary. But if the same word occurs in multiple files, I want it to be returned only once, i.e. the result should contain each non-stop word exactly once.

import os
import re

# path and stopwordfile are assumed to be defined elsewhere in the script

def run():
    filelist = os.listdir(path)
    # capture the text inside <div class="body">, then strip <a>/<p> tags,
    # &quot; entities and punctuation
    regex = re.compile(r'.*<div class="body">(.*?)</div>.*', re.DOTALL | re.IGNORECASE)
    reg1 = re.compile(r'</?[ap][^>]*>', re.DOTALL | re.IGNORECASE)
    quotereg = re.compile(r'&quot;', re.DOTALL | re.IGNORECASE)
    puncreg = re.compile(r'[^\w]', re.DOTALL | re.IGNORECASE)
    f = open(stopwordfile, 'r')
    stopwords = f.read().lower().split()
    filewords = {}

    htmlfiles = []
    for file in filelist:
        if file[-5:] == '.html':
            htmlfiles.append(file)
            totalfreq = {}

    for file in htmlfiles:
        f = open(path + file, 'r')
        words = f.read().lower()
        words = regex.findall(words)[0]
        words = quotereg.sub(' ', words)
        words = reg1.sub(' ', words)
        words = puncreg.sub(' ', words)
        words = words.strip().split()

        # remove every occurrence of each stop word
        for w in stopwords:
            while w in words:
                words.remove(w)

        # this is the part that doesn't work: freq is never filled in,
        # and each file's words are printed separately, with repeats
        # across files
        freq = {}
        for w in words:
            words = words
        print(words)

if __name__ == '__main__':
    run()


Answer (仙气飘飘, 2024-11-04 00:19:35)

Use a set. Simply add every word you find to the set; it ignores duplicates.
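
For example, a throwaway snippet (not part of the original script) demonstrating that property:

seen = set()
for word in ["the", "cat", "the", "mat"]:
    seen.add(word)          # adding "the" a second time is a no-op

print(seen)                 # {'cat', 'mat', 'the'}, in some arbitrary order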

Assuming you have an iterator that returns each word in a file (this is for plain text; HTML would be rather more complicated):

def words(filename):
    with open(filename) as wordfile:
        for line in wordfile:
            for word in line.split():
                yield word
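
As a rough sketch of the "HTML would be rather more complicated" caveat, the same generator shape can be adapted to the question's files. This reuses the assumptions from the question (one <div class="body"> per file, <a>/<p> tags, &quot; entities and punctuation to strip); a real HTML parser such as html.parser would be more robust:

import re

def html_words(filename):
    # Assumptions taken from the question, not from this answer.
    body_re = re.compile(r'.*<div class="body">(.*?)</div>.*', re.DOTALL | re.IGNORECASE)
    tag_re = re.compile(r'</?[ap][^>]*>', re.IGNORECASE)
    with open(filename) as htmlfile:
        text = htmlfile.read().lower()
        match = body_re.search(text)
        if not match:
            return                      # no body div; yield nothing
        text = match.group(1)           # inner text of the div
        text = text.replace('&quot;', ' ')
        text = tag_re.sub(' ', text)    # drop <a> and <p> tags
        text = re.sub(r'[^\w]', ' ', text)
        for word in text.split():
            yield word

set(html_words(filename)) then slots into the snippets below exactly like the plain-text version.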

Then getting them into a set is simple:

wordlist = set(words("words.txt"))

If you have multiple files, just do this:

wordlist = set()
wordfiles = ["words1.txt", "words2.txt", "words3.txt"]

for wordfile in wordfiles:
    wordlist |= set(words(wordfile))
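
(Equivalently, wordlist.update(words(wordfile)) adds each file's words without building an intermediate set first.)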

You can also use a set for your stop words. Then you can simply subtract them from the word list after the fact, which will probably be faster than checking to see if each word is a stop word before adding.

stopwords = set(["a", "an", "the"])
wordlist -= stopwords
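
Putting it together for the original script, a minimal sketch (assuming path and stopwordfile are defined as in the question, and html_words is the helper sketched above):

import os

unique_words = set()
for name in os.listdir(path):
    if name.endswith('.html'):
        unique_words |= set(html_words(path + name))

with open(stopwordfile) as f:
    stopwords = set(f.read().lower().split())

unique_words -= stopwords          # drop all stop words in one shot
print(sorted(unique_words))        # each non-stop word exactly once, across all files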