AttributeError: 'Tokenizer' object has no attribute 'analyzer'

Posted on 2025-02-12 10:31:16

from numpy import argmax
from keras.preprocessing.sequence import pad_sequences

def generate_desc(model, tokenizer, photo, max_length):
    # seed the generation process
    in_text = 'startseq'
    # iterate over the whole length of the sequence
    for i in range(max_length):
        # integer encode input sequence
        print('seqqqqq')
        sequence = tokenizer.texts_to_sequences([in_text])[0]
        print('seqqq done')
        # pad input
        sequence = pad_sequences([sequence], maxlen=max_length)
        print('pad seqqqq')
        # predict next word
        yhat = model.predict([photo, sequence], verbose=0)
        # convert probability to integer
        yhat = argmax(yhat)
        # map integer to word
        word = word_for_id(yhat, tokenizer)
        # stop if we cannot map the word
        if word is None:
            break
        # append as input for generating the next word
        in_text += ' ' + word
        # stop if we predict the end of the sequence
        if word == 'endseq':
            break
    return in_text
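
The loop calls a `word_for_id` helper that is not shown in the question. Assuming the tokenizer exposes a Keras-style `word_index` dict (word → integer), a minimal sketch that inverts that mapping could look like this (hypothetical reconstruction, matching the common image-captioning tutorial pattern):

```python
def word_for_id(integer, tokenizer):
    """Map a predicted integer index back to its word, or None if unknown."""
    # tokenizer.word_index maps word -> index; invert it by linear scan
    for word, index in tokenizer.word_index.items():
        if index == integer:
            return word
    return None
```

Returning `None` for an unmapped index is what lets the `if word is None: break` guard in `generate_desc` terminate generation cleanly.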

I have a problem with this line:

sequence = tokenizer.texts_to_sequences([in_text])[0]

When I call this function I get this:

in texts_to_sequences 
return list(self.texts_to_sequences_generator(texts))

in texts_to_sequences_generator if self.analyzer is None: 
AttributeError: 'Tokenizer' object has no attribute 'analyzer'

Note: the tokenizer is a pickle file that I opened and loaded.
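
This error is consistent with loading a pickle that was written by a different (older) Keras version: unpickling restores the saved `__dict__` without calling `__init__`, so any attribute a newer `Tokenizer` class sets in `__init__` (such as `analyzer`) is simply missing from the loaded object. A minimal, self-contained sketch of that mechanism, using a stand-in class rather than the real Keras `Tokenizer`:

```python
import pickle

class Tokenizer:                      # "old" definition: no analyzer attribute
    def __init__(self):
        self.word_index = {'startseq': 1, 'dog': 2}

payload = pickle.dumps(Tokenizer())   # saved under the old definition

class Tokenizer:                      # "new" definition sets analyzer in __init__
    def __init__(self):
        self.word_index = {}
        self.analyzer = str.split
    def texts_to_sequences(self, texts):
        # like Keras, this code path touches self.analyzer
        return [[self.word_index.get(w, 0) for w in self.analyzer(t)]
                for t in texts]

tok = pickle.loads(payload)           # __init__ is NOT called on unpickle
print(hasattr(tok, 'analyzer'))       # False: the same failure mode as the question
tok.analyzer = str.split              # workaround: patch the missing attribute
print(tok.texts_to_sequences(['startseq dog']))  # -> [[1, 2]]
```

With the real Keras `Tokenizer`, the more robust fixes are to re-create the pickle with the same Keras version you load it under, or (depending on the Keras version) to serialize via `tokenizer.to_json()` and restore with `keras.preprocessing.text.tokenizer_from_json`, which does not depend on the pickled class layout.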

