Creating a new corpus with NLTK

Published 2024-10-16 14:13:39

I reckon the usual answer to my title is to go and read the documentation, but I went through the NLTK book and it doesn't give the answer. I'm fairly new to Python.

I have a bunch of .txt files, and I want to be able to use the corpus functions that NLTK provides for the corpora in nltk_data.

I've tried PlaintextCorpusReader but I couldn't get further than:

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.words()

How do I segment the sentences of newcorpus using punkt? I tried the punkt functions, but they couldn't read the PlaintextCorpusReader class.

Can you also show me how to write the segmented data into text files?

Comments (4)

揽月 2024-10-23 14:13:39

After some years of figuring out how it works, here's an updated tutorial on how to create an NLTK corpus from a directory of text files.

The main idea is to make use of the nltk.corpus.reader package. If you have a directory of text files in English, it's best to use the PlaintextCorpusReader.

If you have a directory that looks like this:

newcorpus/
         file1.txt
         file2.txt
         ...

Simply use these lines of code and you can get a corpus:

import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

corpusdir = 'newcorpus/' # Directory of corpus.

newcorpus = PlaintextCorpusReader(corpusdir, '.*')
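
A small aside that is not in the original answer: the second argument to PlaintextCorpusReader is a regular expression over file names (or an explicit list of fileids), so if the directory also contains files you don't want in the corpus, you could restrict the reader, for example, to .txt files only:

# Assumption for illustration: only pick up files whose names end in .txt.
newcorpus = PlaintextCorpusReader(corpusdir, r'.*\.txt')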

NOTE: PlaintextCorpusReader will use the default nltk.tokenize.sent_tokenize() and nltk.tokenize.word_tokenize() to split your texts into sentences and words. These functions are built for English, so they may NOT work for all languages.

Here's the full code showing how to create test text files, how to create a corpus with NLTK, and how to access the corpus at different levels:

import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

# Let's create a corpus with 2 texts in different textfiles.
txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
txt2 = """Are you a foo bar? Yes I am. Possibly, everyone is.\n"""
corpus = [txt1, txt2]

# Make a new dir for the corpus.
corpusdir = 'newcorpus/'
if not os.path.isdir(corpusdir):
    os.mkdir(corpusdir)

# Output the files into the directory.
filename = 0
for text in corpus:
    filename += 1
    with open(corpusdir + str(filename) + '.txt', 'w') as fout:
        print(text, file=fout)

# Check that our corpus does exist and the files are correct.
assert os.path.isdir(corpusdir)
for infile, text in zip(sorted(os.listdir(corpusdir)), corpus):
    assert open(corpusdir + infile, 'r').read().strip() == text.strip()


# Create a new corpus by specifying the parameters
# (1) directory of the new corpus
# (2) the fileids of the corpus
# NOTE: in this case the fileids are simply the filenames.
newcorpus = PlaintextCorpusReader('newcorpus/', '.*')

# Access each file in the corpus.
for infile in sorted(newcorpus.fileids()):
    print(infile)  # The fileid of each file.
    fin = newcorpus.open(infile)  # Open the file as a stream.
    print(fin.read().strip())  # Print the content of the file.
    fin.close()
print()

# Access the plaintext; outputs a pure string.
print(newcorpus.raw().strip())
print()

# Access paragraphs in the corpus. (list of list of list of strings)
# NOTE: NLTK automatically calls nltk.tokenize.sent_tokenize and
#       nltk.tokenize.word_tokenize.
#
# Each element in the outermost list is a paragraph, and
# each paragraph contains sentence(s), and
# each sentence contains token(s).
print(newcorpus.paras())
print()

# To access paragraphs of a specific fileid.
print(newcorpus.paras(newcorpus.fileids()[0]))

# Access sentences in the corpus. (list of list of strings)
# NOTE: the texts are flattened into sentences that contain tokens.
print(newcorpus.sents())
print()

# To access sentences of a specific fileid.
print(newcorpus.sents(newcorpus.fileids()[0]))

# Access just tokens/words in the corpus. (list of strings)
print(newcorpus.words())

# To access tokens of a specific fileid.
print(newcorpus.words(newcorpus.fileids()[0]))

Finally, to read a directory of texts and create an NLTK corpus in another language, you must first ensure that you have Python-callable word tokenization and sentence tokenization modules that take a string as input and produce output like this:

>>> from nltk.tokenize import sent_tokenize, word_tokenize
>>> txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
>>> sent_tokenize(txt1)
['This is a foo bar sentence.', 'And this is the first txtfile in the corpus.']
>>> word_tokenize(sent_tokenize(txt1)[0])
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.']
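
As a minimal sketch (not part of the original answer): suppose you have such callables, say hypothetical my_sent_tokenize and my_word_tokenize functions for your language. PlaintextCorpusReader's word_tokenizer and sent_tokenizer parameters expect tokenizer objects with a .tokenize() method rather than bare functions, so one way to plug your functions in is a small adapter class:

from nltk.corpus.reader.plaintext import PlaintextCorpusReader
from nltk.tokenize.api import TokenizerI

# Trivial stand-ins so the sketch runs; replace them with your language's
# real sentence and word tokenizers.
def my_sent_tokenize(text):
    return [line for line in text.split('\n') if line.strip()]

def my_word_tokenize(sentence):
    return sentence.split()

class CallableTokenizer(TokenizerI):
    """Adapts a plain tokenization function to NLTK's tokenizer interface."""
    def __init__(self, func):
        self._func = func
    def tokenize(self, text):
        return self._func(text)

newcorpus = PlaintextCorpusReader(
    'newcorpus/', r'.*\.txt',
    sent_tokenizer=CallableTokenizer(my_sent_tokenize),
    word_tokenizer=CallableTokenizer(my_word_tokenize))

With that in place, newcorpus.sents() and newcorpus.words() go through your own tokenizers instead of the English defaults.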
夏花。依旧 2024-10-23 14:13:39

I think the PlaintextCorpusReader already segments the input with a punkt tokenizer, at least if your input language is English.

PlaintextCorpusReader's constructor:

def __init__(self, root, fileids,
             word_tokenizer=WordPunctTokenizer(),
             sent_tokenizer=nltk.data.LazyLoader(
                 'tokenizers/punkt/english.pickle'),
             para_block_reader=read_blankline_block,
             encoding='utf8'):

You can pass the reader your own word and sentence tokenizers, but for the latter the default is already nltk.data.LazyLoader('tokenizers/punkt/english.pickle').
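
For example (a sketch that is not part of the original answer; it assumes the punkt models have been downloaded with nltk.download('punkt'), which ships pickles for several languages, and it just reuses the newcorpus/ directory from the answer above), you could point the reader at a different pretrained punkt model:

import nltk.data
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

# Use the pretrained German punkt model instead of the English default.
german_corpus = PlaintextCorpusReader(
    'newcorpus/', r'.*\.txt',
    sent_tokenizer=nltk.data.LazyLoader('tokenizers/punkt/german.pickle'))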

For a single string, a tokenizer would be used as follows (see the NLTK documentation on the punkt tokenizer).

>>> import nltk.data
>>> text = """
... Punkt knows that the periods in Mr. Smith and Johann S. Bach
... do not mark sentence boundaries.  And sometimes sentences
... can start with non-capitalized words.  i is a good variable
... name.
... """
>>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>> tokenizer.tokenize(text.strip())
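
The question also asked how to write the segmented data back to text files. None of the answers cover that, so here is a minimal sketch; the corpus root './' matches the question's setup, the fileid + '.sents' output name is just an illustrative choice, and it assumes the punkt data is available:

from nltk.corpus import PlaintextCorpusReader

newcorpus = PlaintextCorpusReader('./', r'.*\.txt')
for fileid in newcorpus.fileids():
    # sents() applies the default punkt sentence splitter and the word
    # tokenizer, so each sentence comes back as a list of tokens.
    with open(fileid + '.sents', 'w') as fout:
        for sent in newcorpus.sents(fileid):
            fout.write(' '.join(sent) + '\n')  # one sentence per line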
Saygoodbye 2024-10-23 14:13:39
>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> # If the ./ directory contains the file my_corpus.txt, you can
>>> # view all of its words like this:
>>> newcorpus.words('my_corpus.txt')
满意归宿 2024-10-23 14:13:39
import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

filecontent1 = "This is a cow"
filecontent2 = "This is a Dog"

# Write the two texts into a directory that will serve as the corpus root.
corpusdir = 'nltk_data/'
os.makedirs(corpusdir, exist_ok=True)
with open(corpusdir + 'content1.txt', 'w') as text_file:
    text_file.write(filecontent1)
with open(corpusdir + 'content2.txt', 'w') as text_file:
    text_file.write(filecontent2)

# A list of filenames can be passed as fileids instead of a regex pattern.
text_corpus = PlaintextCorpusReader(corpusdir, ["content1.txt", "content2.txt"])

# Count total and unique tokens per file.
no_of_words_corpus1 = len(text_corpus.words("content1.txt"))
print(no_of_words_corpus1)
no_of_unique_words_corpus1 = len(set(text_corpus.words("content1.txt")))

no_of_words_corpus2 = len(text_corpus.words("content2.txt"))
no_of_unique_words_corpus2 = len(set(text_corpus.words("content2.txt")))
