Using a custom-trained Keras model with a SageMaker endpoint results in ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation
I am trying to run predictions by loading a pre-trained model in SageMaker, but I get the following error:

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{ "error": "Session was not created with a graph before Run()!" }"

My code:
def convert_h5_to_aws(loaded_model):
    """
    Given a pre-trained Keras model, this function converts it to the TF
    protobuf (SavedModel) format and saves it in the file structure which
    AWS expects.
    """
    import tensorflow as tf
    if tf.executing_eagerly():
        tf.compat.v1.disable_eager_execution()

    from tensorflow.python.saved_model import builder
    from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
    from tensorflow.python.saved_model import tag_constants

    # This is the file structure which AWS expects. Cannot be changed.
    model_version = '1'
    export_dir = 'export/Servo/' + model_version

    # Build the Protocol Buffer SavedModel at 'export_dir'
    saved_model_builder = builder.SavedModelBuilder(export_dir)

    # Create the prediction signature to be used by the TensorFlow Serving Predict API
    signature = predict_signature_def(
        inputs={"inputs": loaded_model.input},
        outputs={"score": loaded_model.output})

    from keras import backend as K
    with K.get_session() as sess:
        # Save the meta graph and variables
        saved_model_builder.add_meta_graph_and_variables(
            sess=sess,
            tags=[tag_constants.SERVING],
            signature_def_map={"serving_default": signature})
        saved_model_builder.save()

    # Create a gzip-compressed tarball of the export directory
    import tarfile
    with tarfile.open('model.tar.gz', mode='w:gz') as archive:
        archive.add('export', recursive=True)
convert_h5_to_aws(model)

import sagemaker
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz', key_prefix='model')

!touch train.py  # create an empty Python file

import boto3, re
from sagemaker import get_execution_role
# the (default) IAM role you created when creating this notebook
role = get_execution_role()
# Create a SageMaker model (see AWS console > SageMaker > Models)
from sagemaker.tensorflow.model import TensorFlowModel
sagemaker_model = TensorFlowModel(
    model_data='s3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
    role=role,
    framework_version='1.12',
    entry_point='train.py')

# Deploy the SageMaker model to an endpoint
predictor = sagemaker_model.deploy(
    initial_instance_count=1,
    instance_type='ml.m4.xlarge')

# Create a predictor which uses this new endpoint
import sagemaker
from sagemaker.tensorflow.model import TensorFlowModel
# endpoint = ''  # get the endpoint name from SageMaker > Endpoints
predictor = sagemaker.tensorflow.model.TensorFlowPredictor(endpoint, sagemaker_session)

# .predict sends the data to our endpoint
data = X_test  # <-- update this to have inputs for your model
predictor.predict(data)
I have also tried using different versions of TensorFlowModel.
Is all of this code in a notebook? You want to make sure you are properly tarring your model artifacts and inference code. Make sure the metadata for your saved model is stored properly, and if you have an inference script with inference functions (handling pre- and post-processing), it should be wrapped in a code directory inside the tar file as well. Here is an example of deploying a pre-trained Sklearn model on SageMaker; you can do the same with your pre-trained TensorFlow model.
Sklearn pre-trained example: https://github.com/RamVegiraju/Pre-Trained-Sklearn-SageMaker
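As a rough illustration of the layout the answer describes, the snippet below builds a model.tar.gz whose root contains the export/Servo/<version>/ SavedModel directory alongside a code/ directory holding the inference script. The file names saved_model.pb and inference.py are stand-in placeholders here (a real SavedModel also includes a variables/ directory), not details taken from the question:

```python
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()

# export/Servo/<version>/ holds the SavedModel; code/ holds the inference script.
os.makedirs(os.path.join(workdir, 'export/Servo/1'))
os.makedirs(os.path.join(workdir, 'code'))

# Placeholder files standing in for the real SavedModel and inference script.
open(os.path.join(workdir, 'export/Servo/1/saved_model.pb'), 'w').close()
with open(os.path.join(workdir, 'code/inference.py'), 'w') as f:
    f.write('# input/output handlers for pre- and post-processing go here\n')

# Tar both directories; arcname keeps the paths relative, so the archive
# root contains export/ and code/ side by side.
tar_path = os.path.join(workdir, 'model.tar.gz')
with tarfile.open(tar_path, mode='w:gz') as archive:
    archive.add(os.path.join(workdir, 'export'), arcname='export', recursive=True)
    archive.add(os.path.join(workdir, 'code'), arcname='code', recursive=True)

with tarfile.open(tar_path) as archive:
    names = sorted(archive.getnames())
print(names)
```

With this layout, the TensorFlow Serving container can locate the SavedModel under export/Servo/1 while the inference script travels with the artifact instead of an empty train.py.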