400 Document pages exceed the limit: "PAGE_LIMIT_EXCEEDED"

Posted on 2025-01-21 16:49:02


The DocumentProcessorServiceAsyncClient.process_document method is erroring out with the following error message: 400 Document pages exceed the limit: "PAGE_LIMIT_EXCEEDED". According to the API documentation, this processor should be able to handle a maximum of 200 pages. By using the DocumentProcessorServiceAsyncClient instead of the DocumentProcessorServiceClient, I assumed that I would be able to leverage the asynchronous maximum page limit. This does not appear to be the case.

The sample code I am testing:

from google.cloud import documentai

api_path = f'projects/{project_id}/locations/{gcloud_region}/processors/{processor_id}'
documentai_client = documentai.DocumentProcessorServiceAsyncClient()  # maybe pass some client_options here?

async def invoke_invoice_processor(self, filebytes):
    raw_document = documentai.RawDocument(
        content=filebytes,
        mime_type="application/pdf",
    )
    request = documentai.ProcessRequest(
        name=api_path,
        raw_document=raw_document,
    )
    response = await documentai_client.process_document(request=request)
    return response.document

The above code block works with PDFs of 10 pages or fewer. It only fails with PDFs larger than 10 pages.

My question: what do I need to change in the above code to successfully process larger PDFs of more than 10 pages?

Comments (2)

无畏 2025-01-28 16:49:02

The comment from yan-hic@ is correct.

Late answer, but as I guess you figured, the 200-page limit is for batch requests, which are asynchronous by definition. The confusion comes from the fact that there is also an async client in the client libraries. Use batch_process_documents in either client to go over 10 pages.

To add more detail: follow the code sample provided in the "Send a processing request" guide for batch processing to send multiple documents at once and to send more pages than online processing allows. The async client does not affect the page limits of the processor or the platform.

https://cloud.google.com/document-ai/quotas#content_limits
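
For reference, the same batch call can also be made from the async client the question already uses. Here is a minimal sketch under that assumption (the function name and arguments are illustrative, and it assumes the input PDF already lives in Cloud Storage):

from google.api_core.client_options import ClientOptions
from google.cloud import documentai

async def batch_process_async(
    project_id, location, processor_id, gcs_input_uri, gcs_output_uri
):
    # The endpoint must match the processor's location when it isn't 'us'
    opts = ClientOptions(api_endpoint=f"{location}-documentai.googleapis.com")
    client = documentai.DocumentProcessorServiceAsyncClient(client_options=opts)

    request = documentai.BatchProcessRequest(
        name=client.processor_path(project_id, location, processor_id),
        input_documents=documentai.BatchDocumentsInputConfig(
            gcs_documents=documentai.GcsDocuments(
                documents=[
                    documentai.GcsDocument(
                        gcs_uri=gcs_input_uri, mime_type="application/pdf"
                    )
                ]
            )
        ),
        document_output_config=documentai.DocumentOutputConfig(
            gcs_output_config=documentai.DocumentOutputConfig.GcsOutputConfig(
                gcs_uri=gcs_output_uri
            )
        ),
    )

    # batch_process_documents starts a long-running operation;
    # awaiting result() polls until it completes
    operation = await client.batch_process_documents(request=request)
    return await operation.result(timeout=400)

The full synchronous sample from the documentation follows: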

import re

from google.api_core.client_options import ClientOptions
from google.api_core.exceptions import InternalServerError
from google.api_core.exceptions import RetryError
from google.cloud import documentai
from google.cloud import storage

# TODO(developer): Uncomment these variables before running the sample.
# project_id = 'YOUR_PROJECT_ID'
# location = 'YOUR_PROCESSOR_LOCATION' # Format is 'us' or 'eu'
# processor_id = 'YOUR_PROCESSOR_ID' # Create processor before running sample
# gcs_input_uri = "YOUR_INPUT_URI" # Format: gs://bucket/directory/file.pdf
# input_mime_type = "application/pdf"
# gcs_output_bucket = "YOUR_OUTPUT_BUCKET_NAME" # Format: gs://bucket
# gcs_output_uri_prefix = "YOUR_OUTPUT_URI_PREFIX" # Format: directory/subdirectory/
# field_mask = "text,entities,pages.pageNumber"  # Optional. The fields to return in the Document object.


def batch_process_documents(
    project_id: str,
    location: str,
    processor_id: str,
    gcs_input_uri: str,
    input_mime_type: str,
    gcs_output_bucket: str,
    gcs_output_uri_prefix: str,
    field_mask: str = None,
    timeout: int = 400,
):

    # You must set the api_endpoint if you use a location other than 'us'.
    opts = ClientOptions(api_endpoint=f"{location}-documentai.googleapis.com")

    client = documentai.DocumentProcessorServiceClient(client_options=opts)

    gcs_document = documentai.GcsDocument(
        gcs_uri=gcs_input_uri, mime_type=input_mime_type
    )

    # Load GCS Input URI into a List of document files
    gcs_documents = documentai.GcsDocuments(documents=[gcs_document])
    input_config = documentai.BatchDocumentsInputConfig(gcs_documents=gcs_documents)

    # NOTE: Alternatively, specify a GCS URI Prefix to process an entire directory
    #
    # gcs_input_uri = "gs://bucket/directory/"
    # gcs_prefix = documentai.GcsPrefix(gcs_uri_prefix=gcs_input_uri)
    # input_config = documentai.BatchDocumentsInputConfig(gcs_prefix=gcs_prefix)
    #

    # Cloud Storage URI for the Output Directory
    # This must end with a trailing forward slash `/`
    destination_uri = f"{gcs_output_bucket}/{gcs_output_uri_prefix}"

    gcs_output_config = documentai.DocumentOutputConfig.GcsOutputConfig(
        gcs_uri=destination_uri, field_mask=field_mask
    )

    # Where to write results
    output_config = documentai.DocumentOutputConfig(gcs_output_config=gcs_output_config)

    # The full resource name of the processor, e.g.:
    # projects/project_id/locations/location/processor/processor_id
    name = client.processor_path(project_id, location, processor_id)

    request = documentai.BatchProcessRequest(
        name=name,
        input_documents=input_config,
        document_output_config=output_config,
    )

    # BatchProcess returns a Long Running Operation (LRO)
    operation = client.batch_process_documents(request)

    # Continually polls the operation until it is complete.
    # This could take some time for larger files
    # Format: projects/PROJECT_NUMBER/locations/LOCATION/operations/OPERATION_ID
    try:
        print(f"Waiting for operation {operation.operation.name} to complete...")
        operation.result(timeout=timeout)
    # Catch exception when operation doesn't finish before timeout
    except (RetryError, InternalServerError) as e:
        print(e.message)

    # NOTE: Can also use callbacks for asynchronous processing
    #
    # def my_callback(future):
    #   result = future.result()
    #
    # operation.add_done_callback(my_callback)

    # Once the operation is complete,
    # get output document information from operation metadata
    metadata = documentai.BatchProcessMetadata(operation.metadata)

    if metadata.state != documentai.BatchProcessMetadata.State.SUCCEEDED:
        raise ValueError(f"Batch Process Failed: {metadata.state_message}")

    storage_client = storage.Client()

    print("Output files:")
    # One process per Input Document
    for process in metadata.individual_process_statuses:
        # output_gcs_destination format: gs://BUCKET/PREFIX/OPERATION_NUMBER/INPUT_FILE_NUMBER/
        # The Cloud Storage API requires the bucket name and URI prefix separately
        matches = re.match(r"gs://(.*?)/(.*)", process.output_gcs_destination)
        if not matches:
            print(
                "Could not parse output GCS destination:",
                process.output_gcs_destination,
            )
            continue

        output_bucket, output_prefix = matches.groups()

        # Get List of Document Objects from the Output Bucket
        output_blobs = storage_client.list_blobs(output_bucket, prefix=output_prefix)

        # Document AI may output multiple JSON files per source file
        for blob in output_blobs:
            # Document AI should only output JSON files to GCS
            if ".json" not in blob.name:
                print(
                    f"Skipping non-supported file: {blob.name} - Mimetype: {blob.content_type}"
                )
                continue

            # Download JSON File as bytes object and convert to Document Object
            print(f"Fetching {blob.name}")
            document = documentai.Document.from_json(
                blob.download_as_bytes(), ignore_unknown_fields=True
            )

            # For a full list of Document object attributes, please reference this page:
            # https://cloud.google.com/python/docs/reference/documentai/latest/google.cloud.documentai_v1.types.Document

            # Read the text recognition output from the processor
            print("The document contains the following text:")
            print(document.text)
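
A hypothetical invocation of the sample above, with placeholder values for the project, processor, and bucket names:

batch_process_documents(
    project_id="my-project",          # placeholder values
    location="us",
    processor_id="abcdef1234567890",
    gcs_input_uri="gs://my-bucket/invoices/large-invoice.pdf",
    input_mime_type="application/pdf",
    gcs_output_bucket="gs://my-bucket",
    gcs_output_uri_prefix="docai-output/",
)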
如梦 2025-01-28 16:49:02


If one does not necessarily need all pages to process the document, there is an option one can pass: processOptions (ProcessOptions).

Within that there are the following options:

individualPageSelector object (IndividualPageSelector)
Which pages to process (1-indexed).

fromStart integer
Only process certain pages from the start. Process all if the document has fewer pages.

fromEnd integer
Only process certain pages from the end, same as above.

Thus one can set fromStart to whatever the current maximum is (e.g. 10), and the request will be processed based on just those first pages, avoiding the PAGE_LIMIT_EXCEEDED error.

Example code would look something like this:

$filename = 'your-filename-here';
$object = new StdClass();
$object->skipHumanReview = true;
$object->rawDocument = new StdClass();
$object->rawDocument->mimeType = mime_content_type($filename);
$object->rawDocument->content = base64_encode(trim(file_get_contents($filename)));
// Only process the first 10 pages, keeping the request under the online page limit
$object->processOptions = new StdClass();
$object->processOptions->fromStart = 10;

// JSON request body for the processors.process REST method
$json = json_encode($object, JSON_UNESCAPED_SLASHES);

See https://cloud.google.com/document-ai/docs/reference/rest/v1/ProcessOptions for more information.
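
Since the question uses the Python client, the equivalent there would look roughly like this; a sketch that assumes the installed google-cloud-documentai version exposes ProcessOptions, and that reuses the question's api_path and documentai_client:

from google.cloud import documentai

async def invoke_invoice_processor_first_pages(filebytes):
    request = documentai.ProcessRequest(
        name=api_path,
        raw_document=documentai.RawDocument(
            content=filebytes,
            mime_type="application/pdf",
        ),
        # Only look at the first 10 pages so online processing
        # stays under its page limit
        process_options=documentai.ProcessOptions(from_start=10),
    )
    response = await documentai_client.process_document(request=request)
    return response.document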
