AttributeError: 'Tokenizer' object has no attribute 'analyzer'
# imports assumed by this snippet
from numpy import argmax
from keras.preprocessing.sequence import pad_sequences

def generate_desc(model, tokenizer, photo, max_length):
    # seed the generation process
    in_text = 'startseq'
    # iterate over the whole length of the sequence
    for i in range(max_length):
        # integer encode input sequence
        print('seqqqqq')
        sequence = tokenizer.texts_to_sequences([in_text])[0]
        print('seqqq done')
        # pad input
        sequence = pad_sequences([sequence], maxlen=max_length)
        print('pad seqqqq')
        # predict next word
        yhat = model.predict([photo, sequence], verbose=0)
        # convert probability to integer
        yhat = argmax(yhat)
        # map integer to word
        word = word_for_id(yhat, tokenizer)
        # stop if we cannot map the word
        if word is None:
            break
        # append as input for generating the next word
        in_text += ' ' + word
        # stop if we predict the end of the sequence
        if word == 'endseq':
            break
    return in_text
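The helper `word_for_id` used above is not shown; in image-captioning examples like this it is typically a reverse lookup into the tokenizer's `word_index` dict. A sketch, assuming that standard implementation:

```python
from types import SimpleNamespace

def word_for_id(integer, tokenizer):
    # reverse lookup: find the word whose index matches the predicted integer
    for word, index in tokenizer.word_index.items():
        if index == integer:
            return word
    return None

# quick check with a stand-in tokenizer object (not a real Keras Tokenizer)
tok = SimpleNamespace(word_index={'startseq': 1, 'dog': 2, 'endseq': 3})
print(word_for_id(2, tok))   # dog
print(word_for_id(99, tok))  # None
```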
I have a problem with this line:
sequence = tokenizer.texts_to_sequences([in_text])[0]
When I call this function I get the following traceback:
  in texts_to_sequences
    return list(self.texts_to_sequences_generator(texts))
  in texts_to_sequences_generator
    if self.analyzer is None:
AttributeError: 'Tokenizer' object has no attribute 'analyzer'
Note: the tokenizer is a pickle file that I opened and loaded.
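For context, this kind of AttributeError is typical when a pickle is loaded under a different library version than the one that saved it: unpickling restores only the instance attributes that existed at save time and skips `__init__`, so an attribute the newer class code checks for (here `analyzer`) can simply be absent. A minimal illustration with a stand-in class, not Keras itself:

```python
import pickle

class Tokenizer:  # stand-in for the "old" version of the class
    def __init__(self):
        self.word_index = {'startseq': 1}

blob = pickle.dumps(Tokenizer())

class Tokenizer:  # stand-in for the "new" version, which also expects `analyzer`
    def __init__(self):
        self.word_index = {'startseq': 1}
        self.analyzer = None

old = pickle.loads(blob)          # unpickling does not call __init__
print(hasattr(old, 'analyzer'))   # False -> AttributeError on first use
```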