Saving and loading nlp results in spaCy

Asked 2025-01-13 15:24:45


I want to use spaCy to analyze many small texts, and I want to store the nlp results for further use to save processing time. I found code at Storing and Loading spaCy Documents Containing Word Vectors, but I get an error and cannot figure out how to fix it. I am fairly new to Python.

In the following code, I store the nlp results to a file and try to read them again. I can write the first file, but I cannot find the second file (the vocab). I also get two errors: that Doc and Vocab are not defined.

Any idea how to fix this, or another method that achieves the same result, is more than welcome.

Thanks!

import spacy
nlp = spacy.load('en_core_web_md')
doc = nlp("He eats a green apple")
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
            token.shape_, token.is_alpha, token.is_stop)

NLP_FName = "E:\\SaveTest.nlp"
doc.to_disk(NLP_FName)
Vocab_FName = "E:\\SaveTest.voc"
doc.vocab.to_disk(Vocab_FName)

#To read the data again:
idoc = Doc(Vocab()).from_disk(NLP_FName)  # NameError: Doc and Vocab are not imported
idoc.vocab.from_disk(Vocab_FName)

for token in idoc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
            token.shape_, token.is_alpha, token.is_stop)

Comments (3)

罪歌 2025-01-20 15:24:45


I tried your code and hit a few minor issues, which I fixed in the code below.

Note that SaveTest.nlp is a binary file with your doc info, and SaveTest.voc is a folder with all the spaCy model vocab information (vectors, strings, among others).

Changes I made:

  1. Import the Doc class from spacy.tokens
  2. Import the Vocab class from spacy.vocab
  3. Download the en_core_web_md model with the following command:
python -m spacy download en_core_web_md

Please note that spaCy has multiple models for each language, and you usually have to download one first (there are typically sm, md, and lg models). Read more about it here.

Code:

import spacy
from spacy.tokens import Doc
from spacy.vocab import Vocab

nlp = spacy.load('en_core_web_md')
doc = nlp("He eats a green apple")
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
          token.shape_, token.is_alpha, token.is_stop)

# Serialize the Doc (a binary file) and its Vocab (a folder) to disk
NLP_FName = "E:\\SaveTest.nlp"
doc.to_disk(NLP_FName)
Vocab_FName = "E:\\SaveTest.voc"
doc.vocab.to_disk(Vocab_FName)

# To read the data again: restore into an empty Doc with a fresh Vocab
idoc = Doc(Vocab()).from_disk(NLP_FName)
idoc.vocab.from_disk(Vocab_FName)

for token in idoc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
          token.shape_, token.is_alpha, token.is_stop)

Let me know if this is helpful to you, and if not, please add your error message to your original question so I can help.
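If you are saving many small docs this way, wrapping the calls in a pair of helpers keeps the call sites tidy. A minimal sketch (save_doc and load_doc are illustrative names, not spaCy API):

from spacy.tokens import Doc
from spacy.vocab import Vocab

def save_doc(doc, doc_path, vocab_path):
    # Doc.to_disk writes a binary file; Vocab.to_disk writes a folder
    doc.to_disk(doc_path)
    doc.vocab.to_disk(vocab_path)

def load_doc(doc_path, vocab_path):
    # Restore into an empty Doc with a fresh Vocab, then reload the vocab
    idoc = Doc(Vocab()).from_disk(doc_path)
    idoc.vocab.from_disk(vocab_path)
    return idoc

# e.g. round-trip the example above:
# save_doc(doc, "E:\\SaveTest.nlp", "E:\\SaveTest.voc")
# doc2 = load_doc("E:\\SaveTest.nlp", "E:\\SaveTest.voc")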

梦纸 2025-01-20 15:24:45


The efficient way to do this is to use a DocBin instead: https://spacy.io/usage/saving-loading#docs

Example adapted from the docs (you can use doc_bin.to/from_disk instead of to/from_bytes):

import spacy
from spacy.tokens import DocBin

doc_bin = DocBin()
texts = ["Some text", "Lots of texts...", "..."]
nlp = spacy.load("en_core_web_sm")
for doc in nlp.pipe(texts):
    doc_bin.add(doc)

bytes_data = doc_bin.to_bytes()

# Deserialize later, e.g. in a new process
nlp = spacy.blank("en")
doc_bin = DocBin().from_bytes(bytes_data)
docs = list(doc_bin.get_docs(nlp.vocab))
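For reference, a minimal sketch of the on-disk variant mentioned above (the ./docs.spacy path is illustrative; the .spacy extension is only a convention):

import spacy
from spacy.tokens import DocBin

nlp = spacy.load("en_core_web_sm")
doc_bin = DocBin()
for doc in nlp.pipe(["Some text", "Lots of texts...", "..."]):
    doc_bin.add(doc)

# Persist all docs to a single binary file
doc_bin.to_disk("./docs.spacy")

# Later, e.g. in a new process: reload and materialize Docs against a vocab
nlp = spacy.blank("en")
doc_bin = DocBin().from_disk("./docs.spacy")
docs = list(doc_bin.get_docs(nlp.vocab))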
青春如此纠结 2025-01-20 15:24:45


A long shot at getting an answer, but I tried your code and it doesn't work for DocBins. I pasted my code below, starting from the imports.

import spacy
from spacy.tokens import DocBin
from LanguageIdentifier import predict

import fitz
import glob
import os

from datetime import datetime

import logging

#English-Accuracy: en_core_web_trf
#French-Accuracy: fr_dep_news_trf
#German-Accuracy: de_dep_news_trf
#Multi Language-Accuracy: xx_sent_ud_sm

#DocBins
FRdoc_bin = DocBin(store_user_data=True, attrs=['ENT_TYPE','LEMMA','LIKE_EMAIL','LIKE_URL','LIKE_NUM','ORTH','POS'])
ENdoc_bin = DocBin(store_user_data=True, attrs=['ENT_TYPE','LEMMA','LIKE_EMAIL','LIKE_URL','LIKE_NUM','ORTH','POS'])
DEdoc_bin = DocBin(store_user_data=True, attrs=['ENT_TYPE','LEMMA','LIKE_EMAIL','LIKE_URL','LIKE_NUM','ORTH','POS'])
MULTIdoc_bin = DocBin(store_user_data=True, attrs=['ENT_TYPE','LEMMA','LIKE_EMAIL','LIKE_URL','LIKE_NUM','ORTH','POS'])


#NLP modules
frNLP = spacy.load('fr_dep_news_trf')
enNLP = spacy.load('en_core_web_trf') 
deNLP = spacy.load('de_dep_news_trf')
multiNLP = spacy.load('xx_sent_ud_sm')

ErroredFiles =[]

def processNLP(text):
    
    lang = predict(text) 
    if 'fr' in lang:
        doc = frNLP(text)
        FRdoc_bin.add(doc)
        return
    elif 'de' in lang:
        DEdoc_bin.add(deNLP(text))
        return
    elif 'en' in lang:        
        ENdoc_bin.add(enNLP(text))
        return
    else:
        MULTIdoc_bin.add(multiNLP(text))
        return


def get_text_from_pdf(Path):
    text = ''
    content = fitz.open(Path)
    for page in content:
        if page.number == 1:
            text = page.get_text()[212:]
        else:
            text = text + page.get_text()
    return text


FolderPath = r'C:\[Redacted]\DataSource\*\*.pdf'
PDFfiles = glob.glob(FolderPath)
counter = 0

for file in PDFfiles:    
    counter = counter +1
    try:
        textPDF = get_text_from_pdf(file)
        processNLP(textPDF)
        
    except Exception as e:        
        ErroredFiles.append(file)
        logging.error('Error with file '+ file)
        logging.error('Error message: '+ str(e))
        MULTIdoc_bin.add(multiNLP(textPDF))
    
    if(counter == 10):  #For testing purposes only
        break


CreatedModelPath = r'C:\[Redacted]\Results' + datetime.strftime(datetime.now(),"%Y%m%d%H%M%S") 
os.mkdir(CreatedModelPath)

FRdoc_bin.to_disk(CreatedModelPath+r'\FRdocBin'+'.nlp')
FRdoc_bin.vocab.to_disk(CreatedModelPath+r'\FRdocBin'+'.voc')


ENdoc_bin.to_disk(CreatedModelPath+r'\ENdocBin'+'.nlp')
DEdoc_bin.to_disk(CreatedModelPath+r'\DEdocBin'+'.nlp')
MULTIdoc_bin.to_disk(CreatedModelPath+r'\MULTIdocBin'+'.nlp')

Error I get:

Traceback (most recent call last):

  File "C:\[Redacted]\ProcessingEngine.py", line 117, in <module>
    FRdoc_bin.vocab.to_disk(CreatedModelPath+r'\FRdocBin'+'.voc')

AttributeError: 'DocBin' object has no attribute 'vocab'
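For what it's worth, the error is expected: a DocBin only holds serialized docs, while the vocab lives on the Language pipeline. A sketch of one way to persist and reload under that assumption, reusing FRdoc_bin and frNLP from the code above:

# Save: the DocBin holds the docs; take the vocab from the pipeline instead
FRdoc_bin.to_disk(CreatedModelPath + r'\FRdocBin.nlp')
frNLP.vocab.to_disk(CreatedModelPath + r'\FRdocBin.voc')

# Reload later: materialize Docs against a vocab (a blank pipeline of the
# right language is enough for the stored attributes, as in the answer above)
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank('fr')
doc_bin = DocBin().from_disk(CreatedModelPath + r'\FRdocBin.nlp')
docs = list(doc_bin.get_docs(nlp.vocab))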