Converting an untagged corpus into a tagged corpus (NLTK)
I have a plaintext corpus that I want to tag and save, so I can use it further. What's the best way to do this?
I already have my tagger, but I can't figure out a way to convert the corpus that isn't messy.
2 Answers
Take a look at other tagged corpora, like Brown, for output examples. This will give you an idea of what a tagged corpus should look like. Next, load your corpus (with the PlaintextCorpusReader) and iterate over the sentences, tagging each sentence. Then write each tagged sentence to a file by making a string from the tagged sentence, as in ' '.join([tuple2str(t) for t in tagged_sent]) (after you do from nltk.tag.util import tuple2str). And it's OK if your code is "messy", as long as it does the job correctly. You're not looking for an elegant algorithm here; you're running a very specific script to create a custom corpus.
Are you doing simple unigram tagging, or are you actually parsing the text? I believe NLTK parses/tags such that the output of every token is (token, PoS). Is an array of tuples unacceptable for storing your corpora? Why do you find this messy?
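For reference, a quick illustration of the tuple format this answer describes; nltk.pos_tag is used here only as an example tagger, and the exact tags shown may vary:

    import nltk

    # Requires the 'punkt' and tagger models to be downloaded via nltk.download()
    tokens = nltk.word_tokenize("The cat sat on the mat")
    print(nltk.pos_tag(tokens))
    # e.g. [('The', 'DT'), ('cat', 'NN'), ('sat', 'VBD'), ...]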