How do I pass a *serialized* tensor to a TensorFlow Model Serving server?
I implemented a simple TF model. The model receives a serialized tensor of a gray image (simply a 2-D ndarray) and restores it to a 2-D tensor. After that, inference is applied to this 2-D tensor.
I deployed the model with TensorFlow Model Serving and tried to send a JSON string to the REST port as follows:
{
  "instances": [
    {"b64": bin_str}
  ]
}
I tried something like tf.io.serialize_tensor to convert the input image into a serialized tensor and pass it to the server, but every attempt failed.
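For reference, my client-side attempt looked roughly like this (a minimal sketch; the image shape, host, port, and model name are placeholders):

import base64
import json

import numpy as np
import requests
import tensorflow as tf

# A dummy 2-D gray image (placeholder shape).
image = np.random.rand(28, 28).astype(np.float32)

# Serialize the tensor and base64-encode it for the JSON payload.
serialized = tf.io.serialize_tensor(image).numpy()
bin_str = base64.b64encode(serialized).decode("utf-8")

payload = json.dumps({"instances": [{"b64": bin_str}]})

# The URL and model name are placeholders.
resp = requests.post("http://localhost:8501/v1/models/my_model:predict",
                     data=payload)
print(resp.json())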
I would like to know how to send a serialized tensor to the serving server.
And my saved model has the following signature:
signatures = {
    "serving_default": _get_serve_tf_examples_fn(
        model,
        transform_output).get_concrete_function(
            # explicitly specify the input signature of serve_tf_examples_fn
            tf.TensorSpec(
                shape=[None],
                dtype=tf.string,
                name="examples")),
}
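For completeness, the signatures dict is attached at export time roughly like this (a sketch; the export directory is a placeholder):

import tensorflow as tf

# Export the model so TensorFlow Serving picks up "serving_default".
tf.saved_model.save(model, "serving_model_dir", signatures=signatures)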
and the definition of _get_serve_tf_examples_fn is:
from typing import Dict

import tensorflow as tf
import tensorflow_transform as tft


def _get_serve_tf_examples_fn(model: tf.keras.models.Model,
                              transform_output: tft.TFTransformOutput):
    # Get the Transform graph from the component.
    model.tft_layer = transform_output.transform_features_layer()

    @tf.function
    def serve_tf_examples_fn(serialized: tf.Tensor) -> Dict[str, tf.Tensor]:
        '''Args: serialized: a serialized image tensor.'''
        feature_spec = transform_output.raw_feature_spec()
        # Remove the label spec.
        feature_spec.pop("label")
        # Deserialize the image tensor.
        parsed_features = tf.io.parse_example(serialized, feature_spec)
        # Preprocess the example using the outputs of the Transform pipeline.
        transformed_features = model.tft_layer(parsed_features)
        outputs = model(transformed_features)
        return {"outputs": outputs}

    return serve_tf_examples_fn
The code segment above receives a serialized tensor of a gray image (simply a 2-D ndarray) and restores it to a 2-D tensor. After that, the model runs inference on this 2-D tensor.
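For what it's worth, since tf.io.parse_example consumes serialized tf.train.Example protos rather than raw tf.io.serialize_tensor output, I assume the payload would have to wrap the image in an Example roughly like this (a sketch; the feature key "image" and its bytes encoding are guesses, since my raw_feature_spec is not shown above):

import base64

import numpy as np
import tensorflow as tf

image = np.random.rand(28, 28).astype(np.float32)

# The key "image" is a guess; it must match the keys in
# transform_output.raw_feature_spec().
example = tf.train.Example(features=tf.train.Features(feature={
    "image": tf.train.Feature(bytes_list=tf.train.BytesList(
        value=[tf.io.serialize_tensor(image).numpy()])),
}))
bin_str = base64.b64encode(example.SerializeToString()).decode("utf-8")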
I would like to know how to send a serialized tensor to the REST port of the serving server.
Any help would be appreciated.