Run a Vertex AI model locally
Training with the Vertex AI product on GCP was very easy: I uploaded a dataset and it returned a model, which is saved in a GCP bucket. I downloaded the files, and the tree looks like this:
├── environment.json
├── feature_attributions.yaml
├── final_model_structure.pb
├── instance.yaml
├── predict
│   └── 001
│       ├── assets
│       │   └── PVC_vocab
│       ├── assets.extra
│       │   └── tf_serving_warmup_requests
│       ├── saved_model.pb
│       └── variables
│           ├── variables.data-00000-of-00001
│           └── variables.index
├── prediction_schema.yaml
├── tables_server_metadata.pb
└── transformations.pb
I would like to serve this model locally from a dockerized Python application, but I don't know enough TF to do this, and I am very confused about which .pb file is the actual one that contains the neural network I need.
Thanks for any tips.
Comments (1)
It appears that you successfully trained a machine learning model using the Vertex AI service on Google Cloud and downloaded the model artifacts from your GCP bucket. The files required to serve your model are part of those saved model artifacts.
To serve your model locally from a dockerized Python application, you need to load the saved model into memory and use a serving library, such as TensorFlow Serving, to generate predictions.
Using TensorFlow's SavedModel API, load the saved model into memory as follows:
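A minimal sketch, assuming TensorFlow 2.x and that the export under predict/001 loads as a standard SavedModel (the signature name "serving_default" is the common default, but check the printed keys on your model):

import tensorflow as tf

# Point this at the directory that contains saved_model.pb and variables/,
# i.e. /path/to/your/saved/model/predict/001 in your tree
model = tf.saved_model.load("/path/to/your/saved/model")

# List the exported signatures to see which inputs the model expects
print(list(model.signatures.keys()))

# "serving_default" is the usual signature name; adjust if the printout differs
infer = model.signatures["serving_default"]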
Replace /path/to/your/saved/model in the preceding code with the path to the directory containing the saved model artifacts.
The saved_model.pb file located in the predict/001/ directory contains the actual neural network.
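If you want to verify what that file exposes before writing any Python, the saved_model_cli tool that ships with TensorFlow can dump its signatures and input/output tensors (the path below is the one from your tree):

saved_model_cli show --dir predict/001 --all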
A serving library like TensorFlow Serving or TensorFlow Model Server can be used to serve your model over a network once you have loaded the saved model into memory.
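As a rough sketch of that option, assuming the stock tensorflow/serving image can serve this export directly (some AutoML-style exports need Google's dedicated model server image instead), you can mount the predict directory as the model base path, since TF Serving expects numeric version subdirectories like 001 inside it; the model name vertex_model is just a placeholder:

docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/your/saved/model/predict,target=/models/vertex_model \
  -e MODEL_NAME=vertex_model \
  -t tensorflow/serving

Your Python application can then send prediction requests to http://localhost:8501/v1/models/vertex_model:predict.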
I hope this is useful.