Running a Vertex AI model locally

Posted 2025-01-20 06:17:38


Training with the Vertex AI product on GCP was very easy: I uploaded a dataset and it returned a model saved in a GCP bucket. I downloaded the files, and the tree has these files:

├── environment.json
├── feature_attributions.yaml
├── final_model_structure.pb
├── instance.yaml
├── predict
│   └── 001
│       ├── assets
│       │   └── PVC_vocab
│       ├── assets.extra
│       │   └── tf_serving_warmup_requests
│       ├── saved_model.pb
│       └── variables
│           ├── variables.data-00000-of-00001
│           └── variables.index
├── prediction_schema.yaml
├── tables_server_metadata.pb
└── transformations.pb

I would like to serve this model locally from a Dockerized Python application, but I don't know enough TF to do this, and I am very confused about which .pb file is the one that actually contains the neural network I need.

Thanks for any tips.


Comments (1)

乄_柒ぐ汐 2025-01-27 06:17:38


It appears that you successfully trained a machine learning model using Google Cloud's Vertex AI service and downloaded the model artifacts from your GCP bucket. The files required to serve your model are part of the SavedModel artifacts.

To serve your model locally from a Dockerized Python application, you must load the saved model into memory and use a serving library, such as TensorFlow Serving, to generate predictions.

Using TensorFlow's SavedModel API, load the saved model into memory as follows:

import tensorflow as tf

# Load the SavedModel directory (the one containing saved_model.pb).
model = tf.saved_model.load('/path/to/your/saved/model')

# Grab the first (usually only) serving signature, e.g. 'serving_default'.
signature = list(model.signatures.keys())[0]
predict_fn = model.signatures[signature]

# Expected input names can be inspected via predict_fn.structured_input_signature.
input_data = {'your_input_key': your_input_value}
output = predict_fn(**input_data)

Replace /path/to/your/saved/model in the preceding code with the path to the directory containing the SavedModel artifacts; in your tree, that is the predict/001/ directory.

The saved_model.pb file located in the predict/001/ directory is the one that contains the actual neural network.
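
If you want to double-check the signatures and the exact input names the model expects, the saved_model_cli tool that ships with the tensorflow pip package can inspect that directory (a quick sanity check, run from the folder containing predict/):

# Lists the SignatureDefs with their input/output tensor names.
saved_model_cli show --dir predict/001 --all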

Once the saved model loads correctly, a serving library like TensorFlow Serving (distributed as the tensorflow_model_server binary and as the tensorflow/serving Docker image) can be used to serve your model over the network.
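
For example, here is a minimal sketch using the stock tensorflow/serving image; the model name my_model is an arbitrary placeholder, and TensorFlow Serving expects a directory of numbered versions, which is exactly what your predict/ folder already is:

# Mount the predict/ folder (which contains the 001 version subdirectory)
# and expose the REST API on port 8501. 'my_model' is a placeholder name.
docker run -p 8501:8501 \
  --mount type=bind,source="$(pwd)/predict",target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving

# Your Python app can then POST JSON to the REST endpoint, e.g.:
curl -X POST http://localhost:8501/v1/models/my_model:predict \
  -d '{"instances": [{"your_input_key": "your_input_value"}]}'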

I hope this is useful.
