AttributeError: 'str' object has no attribute 'decode' when loading a Keras saved model

Posted 2025-01-13 19:16:38

After training, I saved both the complete Keras model and the weights alone, using

model.save_weights(MODEL_WEIGHTS) and model.save(MODEL_NAME)

The model and weights were saved successfully with no errors.
I can load the weights with model.load_weights and they work fine, but when I try to load the saved model via load_model, I get the following error:

File "C:/Users/Rizwan/model_testing/model_performance.py", line 46, in <module>
Model2 = load_model('nasnet_RS2.h5',custom_objects={'euc_dist_keras': euc_dist_keras})
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 321, in _deserialize_model
optimizer_weights_group['weight_names']]
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 320, in <listcomp>
n.decode('utf8') for n in
AttributeError: 'str' object has no attribute 'decode'

I have never seen this error before, and I used to load any model successfully. I am using Keras 2.2.4 with the TensorFlow backend, on Python 3.6.
My training code is:

from keras_preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.models import Model, load_model
from keras.layers import Dense
from keras.applications.nasnet import NASNetMobile
from keras.callbacks import (ReduceLROnPlateau, TensorBoard,
                             ModelCheckpoint, EarlyStopping)
import pandas as pd

MODEL_NAME = "nasnet_RS2.h5"
MODEL_WEIGHTS = "nasnet_RS2_weights.h5"

def euc_dist_keras(y_true, y_pred):
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))

def main():
    # Here, we initialize the "NASNetMobile" model type and customize the
    # final feature regressor layer.
    # NASNet is a neural network architecture developed by Google.
    # This architecture is specialized for transfer learning, and was
    # discovered via Neural Architecture Search.
    # NASNetMobile is a smaller version of NASNet.
    model = NASNetMobile()
    model = Model(model.input,
                  Dense(1, activation='linear',
                        kernel_initializer='normal')(model.layers[-2].output))

    # model = load_model('current_best.hdf5', custom_objects={'euc_dist_keras': euc_dist_keras})

    # This model will use the "Adam" optimizer.
    model.compile("adam", euc_dist_keras)
    lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                    patience=5, min_lr=0.003)
    # This callback will log model stats to Tensorboard.
    tb_callback = TensorBoard()
    # This callback will checkpoint the best model at every epoch.
    mc_callback = ModelCheckpoint(filepath='current_best_mem3.h5',
                                  verbose=1, save_best_only=True)
    es_callback = EarlyStopping(monitor='val_loss', min_delta=0, patience=4,
                                verbose=0, mode='auto', baseline=None,
                                restore_best_weights=True)

    # These are the callbacks.
    # callbacks = [lr_callback, tb_callback, mc_callback]
    callbacks = [lr_callback, tb_callback, es_callback]

    train_pd = pd.read_csv("./train3.txt", delimiter=" ",
                           names=["id", "label"], index_col=None)
    test_pd = pd.read_csv("./val3.txt", delimiter=" ",
                          names=["id", "label"], index_col=None)

    # train_pd = pd.read_csv("./train2.txt", delimiter=" ", header=None, index_col=None)
    # test_pd = pd.read_csv("./val2.txt", delimiter=" ", header=None, index_col=None)
    # model.summary()

    batch_size = 32
    datagen = ImageDataGenerator(rescale=1. / 255)
    train_generator = datagen.flow_from_dataframe(
        dataframe=train_pd, directory="./images", x_col="id", y_col="label",
        has_ext=True, class_mode="other", target_size=(224, 224),
        batch_size=batch_size)
    valid_generator = datagen.flow_from_dataframe(
        dataframe=test_pd, directory="./images", x_col="id", y_col="label",
        has_ext=True, class_mode="other", target_size=(224, 224),
        batch_size=batch_size)

    STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
    STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
    model.fit_generator(generator=train_generator,
                        steps_per_epoch=STEP_SIZE_TRAIN,
                        validation_data=valid_generator,
                        validation_steps=STEP_SIZE_VALID,
                        callbacks=callbacks,
                        epochs=20)

    # We save the weights and the whole model.
    model.save_weights(MODEL_WEIGHTS)
    model.save(MODEL_NAME)

if __name__ == '__main__':
    # freeze_support() here if program needs to be frozen
    main()
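For reference, a minimal sketch of the two loading paths in my test script (assuming the architecture is rebuilt exactly as in main() and euc_dist_keras is defined as above): loading only the weights works, while loading the full model raises the error.

# Rebuilding the architecture and loading only the weights works:
model.load_weights(MODEL_WEIGHTS)
# Loading the complete saved model fails with the AttributeError above:
Model2 = load_model(MODEL_NAME, custom_objects={'euc_dist_keras': euc_dist_keras})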

Comments (10)

圈圈圆圆圈圈 2025-01-20 19:16:38

For me the solution was downgrading the h5py package (in my case to 2.10.0); apparently restoring only Keras and TensorFlow to the correct versions was not enough.
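A one-line sketch of that downgrade (version 2.10.0 as stated above; restart the Python process afterwards so the new h5py is picked up):

pip install h5py==2.10.0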

陌若浮生 2025-01-20 19:16:38

I downgraded my h5py package with the following command:

pip install 'h5py==2.10.0' --force-reinstall

Restarted my IPython kernel and it worked.

青春有你 2025-01-20 19:16:38

For me, the installed h5py version was newer than the one in my previous build.
Fixed it by pinning h5py to 2.10.0.
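A quick sketch to verify which h5py version is actually active in the environment (h5py >= 3.0 returns string attributes as str, which is what makes the .decode() call in older Keras fail):

import h5py
print(h5py.__version__)  # should print 2.10.0 after the downgrade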

一抹淡然 2025-01-20 19:16:38

Downgrade the h5py package with the following command to resolve the issue:

pip install h5py==2.10.0 --force-reinstall

烟若柳尘 2025-01-20 19:16:38

Save using the TF format instead of an h5py file, i.e. save_format='tf'. In my case:

model.save_weights("NMT_model_weight.tf",save_format='tf')

诺曦 2025-01-20 19:16:38

I had the same problem; I solved it by passing compile=False to load_model and compiling afterwards:

from keras.optimizers import SGD

model_ = load_model('path to your model.h5', custom_objects={'Scale': Scale()}, compile=False)
sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)
model_.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

皇甫轩 2025-01-20 19:16:38

This is probably due to the model having been saved from a different version of Keras. I got the same problem when loading, from Keras 2.2.6, a model generated by tensorflow.keras (which I think is similar to Keras 2.1.6 for TF 1.12).

You can load the weights with model.load_weights and re-save the complete model from the Keras version you want to use.
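A minimal sketch of that workaround (file names are placeholders; the architecture must be rebuilt exactly as it was originally defined before loading the weights):

# In the target Keras version, rebuild the model architecture, then:
model.load_weights('old_model_weights.h5')  # weight files load across versions
model.save('model_resaved.h5')              # full model now written by this Keras version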

陈年往事 2025-01-20 19:16:38

The solution that worked for me was:

pip3 uninstall keras
pip3 uninstall tensorflow
pip3 install --upgrade pip
pip3 install tensorflow
pip3 install keras

新雨望断虹 2025-01-20 19:16:38

I still kept getting this error after having tensorflow==2.4.1, h5py==2.1.0, and Python 3.8 in my environment.
What fixed it was downgrading the Python version to 3.6.9.
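One way to pin the interpreter version, assuming conda is available (the environment name is hypothetical):

conda create -n keras36 python=3.6.9
conda activate keras36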

ペ泪落弦音 2025-01-20 19:16:38

Downgrading Python, TensorFlow, Keras, and h5py resolved the issue.

python -> 3.6.2

pip install tensorflow==1.3.0
pip install keras==2.1.2
pip install 'h5py==2.10.0' --force-reinstall