How do I make predictions against a private Vertex AI endpoint using the Node.js client library?

Posted 2025-01-14 09:20:50


The documentation on this is a bit vague at the time of posting (https://cloud.google.com/vertex-ai/docs/predictions/using-private-endpoints#sending-prediction-to-private-endpoint); it only shows how to do it with curl.
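For reference, a rough sketch (not from the original post) of what that curl-based flow would look like in plain Node.js. The predict URI below is a placeholder for the private endpoint's HTTP predict address described in the linked docs, which is only reachable from inside the peered VPC, and the payload shape depends on the deployed model.

// Sketch only: replicate the curl call from the docs using Node 18+'s built-in fetch.
// PREDICT_URI is a placeholder for the private endpoint's predict address; it is
// only reachable from a machine inside the peered VPC network.
const PREDICT_URI = "http://REPLACE_WITH_PRIVATE_ENDPOINT_PREDICT_URI";

async function rawPredict(instances: unknown[]): Promise<unknown> {
    const res = await fetch(PREDICT_URI, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        // Same request body shape a public predict call would take.
        body: JSON.stringify({ instances }),
    });
    if (!res.ok) {
        throw new Error(`Prediction failed: ${res.status} ${res.statusText}`);
    }
    return res.json();
}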

I would like to use the Node.js client library if possible, but I've only managed to find examples that don't use a private endpoint, e.g. https://github.com/googleapis/nodejs-ai-platform/blob/main/samples/predict-custom-trained-model.js.

I've read through the type definitions of PredictionServiceClient imported from @google-cloud/aiplatform and didn't find a way to plug in my private endpoint. I've tried making the request anyway by simply specifying the resource name, i.e. const endpoint = `projects/${project}/locations/${location}/endpoints/${endpointId}`, but this leads to the following error:

Error: 13 INTERNAL: Received RST_STREAM with code 0
    at Object.callErrorFromStatus (/home/vitor/vertexai/node_modules/@grpc/grpc-js/src/call.ts:81:24)
    at Object.onReceiveStatus (/home/vitor/vertexai/node_modules/@grpc/grpc-js/src/client.ts:343:36)
    at Object.onReceiveStatus (/home/vitor/vertexai/node_modules/@grpc/grpc-js/src/client-interceptors.ts:462:34)
    at Object.onReceiveStatus (/home/vitor/vertexai/node_modules/@grpc/grpc-js/src/client-interceptors.ts:424:48)
    at /home/vitor/vertexai/node_modules/@grpc/grpc-js/src/call-stream.ts:323:24
    at processTicksAndRejections (node:internal/process/task_queues:78:11) {
  code: 13,
  details: 'Received RST_STREAM with code 0',
  metadata: Metadata { internalRepr: Map(0) {}, options: {} }
}

My code looks like this:

import { v1beta1 } from "@google-cloud/aiplatform";

(async () => {
    const client = new v1beta1.PredictionServiceClient();
    const location = "****";
    const project = "****";
    const endpointId = "****";
    // Fully qualified endpoint resource name.
    const endpoint = `projects/${project}/locations/${location}/endpoints/${endpointId}`;

    // Model-specific prediction parameters (empty here).
    const parameters = {
        structValue: {
            fields: {},
        },
    };

    // Wrap a plain object into the protobuf Struct shape expected by predict().
    const toInstance = (obj: any) => ({
        structValue: {
            fields: {
                ****
            },
        },
    });

    const instance = toInstance(****);
    const instances = [instance];

    const res = await client.predict({
        instances,
        endpoint,
        parameters,
    });
    console.log(res);
})();

Is it possible to make this kind of request at the moment?

Comments (1)

哆啦不做梦 2025-01-21 09:20:50


I had to initialize the client using the following in order to get it to behave as documented.

const client = new PredictionServiceClient({
    apiEndpoint: 'us-central1-aiplatform.googleapis.com'
});
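For completeness, a minimal sketch of the asker's predict call with this client configuration applied; the project, region, and endpoint ID are placeholders, and the instance/parameter payloads still depend on the deployed model.

import { PredictionServiceClient } from "@google-cloud/aiplatform";

// Placeholders: substitute your own project, region, and endpoint ID.
const project = "my-project";
const location = "us-central1";
const endpointId = "1234567890";

(async () => {
    // The fix from the answer: point the client at the regional API host explicitly.
    const client = new PredictionServiceClient({
        apiEndpoint: `${location}-aiplatform.googleapis.com`,
    });

    const endpoint = `projects/${project}/locations/${location}/endpoints/${endpointId}`;

    // Empty structs as placeholders, mirroring the question's code; fill in the
    // fields your model expects.
    const instances = [{ structValue: { fields: {} } }];
    const parameters = { structValue: { fields: {} } };

    const [response] = await client.predict({ endpoint, instances, parameters });
    console.log(response.predictions);
})().catch(console.error);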
