Looking for an n-gram database built from Wikipedia

Posted 2024-08-23 03:39:51

I am effectively trying to solve the same problem as this question:

Finding related words (specifically physical objects) to a specific word

minus the requirement that the words represent physical objects. The answers and edited question seem to indicate that a good start is building a list of n-gram frequencies using Wikipedia text as a corpus. Before I start downloading the mammoth Wikipedia dump, does anyone know if such a list already exists?

PS if the original poster of the previous question sees this, I would love to know how you went about solving the problem, as your results seem excellent :-)


Comments (2)

子栖 2024-08-30 03:39:51

Google has a publicly available terabyte n-gram database (up to 5-grams).
You can order it on 6 DVDs or find a torrent that hosts it.
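For reference, that corpus ships as plain text files with one n-gram and its count per line, tab-separated. A minimal loader along those lines might look like this (the count threshold and file path are placeholders, not anything from the corpus documentation):

```python
def load_ngram_counts(path, min_count=40):
    """Read a tab-separated "ngram<TAB>count" file and keep entries
    whose count is at least min_count."""
    counts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            # rpartition tolerates spaces inside the n-gram itself.
            ngram, _, count = line.rstrip("\n").rpartition("\t")
            if ngram and count.isdigit() and int(count) >= min_count:
                counts[ngram] = int(count)
    return counts
```

Filtering at load time matters here: the raw files are far too large to hold in memory uncut.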

下雨或天晴 2024-08-30 03:39:51


You can find the June 2008 Wikipedia n-grams here. In addition, it also has headwords and tagged sentences. I tried to create my own n-grams, but ran out of memory (32 GB) on the bigrams (the current English Wikipedia is massive). It also took about 8 hours to extract the XML, 5 hours for the unigrams and 8 hours for the bigrams.

The linked n-grams also have the benefit of having been cleaned up somewhat, since MediaWiki markup and Wikipedia leave a lot of junk mixed into the text.
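As a rough illustration of the kind of cleanup involved, a few regular expressions can strip the most common MediaWiki constructs (templates, refs, link markup) before tokenizing. This is a simplified sketch, not a full wikitext parser; nested templates in particular would need more work:

```python
import re

def strip_wiki_markup(text):
    # Drop {{templates}} (non-nested) and <ref>...</ref> footnotes.
    text = re.sub(r"\{\{[^{}]*\}\}", " ", text)
    text = re.sub(r"<ref[^>/]*/>|<ref[^>]*>.*?</ref>", " ", text, flags=re.DOTALL)
    # Keep only the display text of [[target|display]] wiki links.
    text = re.sub(r"\[\[(?:[^\[\]|]*\|)?([^\[\]|]*)\]\]", r"\1", text)
    # Remove bold/italic quote runs and any leftover HTML tags.
    text = re.sub(r"'{2,}", "", text)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()
```

Running something like this before tokenization keeps template and citation debris out of the n-gram counts.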

Here's my Python code:

from nltk.tokenize import sent_tokenize
from nltk.tokenize import wordpunct_tokenize
from datetime import datetime
from collections import defaultdict
from collections import OrderedDict
import operator
import os

# Walk all the English Wikipedia article files and store their paths in a list. 4 minutes.
articles_dir = r'D:\Downloads\Wikipedia\articles'
l = [os.path.join(root, name) for root, _, files in os.walk(articles_dir) for name in files]

t1 = datetime.now()

# For each article (file) loop through all the words and generate unigrams. 1175MB memory use spotted.
# 12 minutes to first output. 4200000: 4:37:24.586706 was last output.
c = 1
d1s = defaultdict(int)
for file in l:
    # Most dump files are UTF-8; fall back to Latin-1 for the few that are not.
    try:
        with open(file, encoding="utf8") as f_in:
            content = f_in.read()
    except UnicodeDecodeError:
        with open(file, encoding="latin-1") as f_in:
            content = f_in.read()
    words = wordpunct_tokenize(content)    # word_tokenize merges 'n ʼn and ʼn into a single word; wordpunct_tokenize does not.
    # Count every token in the article.
    for word in words:
        d1s[word] += 1
    c += 1
    if c % 200000 == 0:
        t2 = datetime.now()
        print(str(c) + ': ' + str(t2 - t1))

t2 = datetime.now()
print('After unigram: ' + str(t2 - t1))

t1 = datetime.now()
# Sort the defaultdict in descending order and write the unigrams to a file.
# 0:00:27.740082 was output. 3285Mb memory. 165Mb output file.
d1ss = OrderedDict(sorted(d1s.items(), key=operator.itemgetter(1), reverse=True))
with open("D:\\Downloads\\Wikipedia\\en_ngram1.txt", mode="w", encoding="utf-8") as f_out:
    for k, v in d1ss.items():
        f_out.write(k + '┼' + str(v) + "\n")
t2 = datetime.now()
print('After unigram write: ' + str(t2 - t1))

# Keep only 1grams seen at least 20 times. d1ss is sorted in descending
# order, so we can stop at the first count below the threshold.
low_count = 20
d1s = {}
for word, count in d1ss.items():
    # Use < rather than ==: if no word has exactly the cut-off count,
    # an equality test would never break and everything would be kept.
    if count < low_count:
        break
    d1s[word] = count

t1 = datetime.now()

# For each article (file) loop through all the sentences and generate 2grams. 13GB memory use spotted.
# 17 minutes to first output. 4200000: 4:37:24.586706 was last output.
c = 1
d2s = defaultdict(int)
for file in l:
    # Most dump files are UTF-8; fall back to Latin-1 for the few that are not.
    try:
        with open(file, encoding="utf8") as f_in:
            content = f_in.read()
    except UnicodeDecodeError:
        with open(file, encoding="latin-1") as f_in:
            content = f_in.read()
    # Split the file content into sentences so bigrams never cross a sentence boundary.
    for sentence in sent_tokenize(content):
        words = wordpunct_tokenize(sentence)    # word_tokenize merges 'n ʼn and ʼn into a single word; wordpunct_tokenize does not.
        # Count adjacent word pairs where both words passed the 1gram threshold.
        for i, word in enumerate(words[:-1]):
            word2 = words[i + 1]
            if word in d1s and word2 in d1s:
                d2s[word + ' ' + word2] += 1
    c += 1
    if c % 200000 == 0:
        t2 = datetime.now()
        print(str(c) + ': ' + str(t2 - t1))

t2 = datetime.now()
print('After bigram: ' + str(t2 - t1))
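To tie this back to the original question of finding related words: once a d2s-style bigram dict exists, the most frequent neighbours of a word can be read straight out of it. A small sketch (the function name is mine, not part of the code above):

```python
from collections import defaultdict

def top_collocates(bigram_counts, word, n=5):
    """Return the n most frequent neighbours of `word`, in either
    direction, from a {"w1 w2": count} dict like d2s above."""
    neighbours = defaultdict(int)
    for gram, count in bigram_counts.items():
        w1, _, w2 = gram.partition(' ')
        if w1 == word:
            neighbours[w2] += count
        elif w2 == word:
            neighbours[w1] += count
    return sorted(neighbours.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Raw bigram frequency favours stopwords ("the", "of"), so in practice an association measure such as PMI over these same counts would give better "related word" lists.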