NLTK word_tokenize returns empty
I am trying to tokenize the words and sentences in a text document, but both tokenizers return empty results. Could you please check and explain why I am seeing this?
Please find the code below (I am not attaching the text document, as it is large, 443 KB):
import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')

f = open('txt_link.txt', 'r', errors='ignore')
raw_doc = f.read()
raw_doc = raw_doc.lower()  # convert text to lowercase
sent_tokens = nltk.sent_tokenize(raw_doc)
word_tokens = nltk.word_tokenize(raw_doc)
word_tokens[:2]
sent_tokens[:2]
Output:
[]
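If raw_doc itself is empty, both tokenizers will return empty lists, which would match the output above. A minimal sanity-check sketch (assuming only the file path from the snippet; the checks themselves are my addition) to confirm the file content is actually being read:

import nltk

# If the file is empty, or its handle was already consumed by an
# earlier f.read() (common in notebooks), raw_doc will be '' and
# both tokenizers will return [].
with open('txt_link.txt', 'r', errors='ignore') as f:  # path from the snippet above
    raw_doc = f.read()

print(len(raw_doc))        # 0 here would explain the empty tokens
print(repr(raw_doc[:80]))  # peek at the first characters read

if raw_doc:
    print(nltk.sent_tokenize(raw_doc)[:2])
    print(nltk.word_tokenize(raw_doc)[:2])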
Thank you