Uploading an XML file to an S3 bucket with boto3

Posted 2025-01-22 18:02:21


I am trying to upload an XML file to an S3 bucket on AWS using a Lambda function and an HTTP request. The main problem is that I am not able to get at the XML file passed in the body of the HTTP request.

import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = 'xmlresultmarco'

    fileName = 'Test' + '.xml'

    uploadByteStream = event

    try:
        s3.put_object(Bucket=bucket, Key=fileName, Body=uploadByteStream)
        return "Upload completed"
    except Exception as e:
        return e

Can someone help me?


Comments (1)

北笙凉宸 2025-01-29 18:02:21


Here is some code to get you started.

The lambda function definition might look something like this.

def download_lambda(event, context):
    url = event['url']
    bucket = event['bucket_name']
    key = event['key']
    download_url(url, bucket_name=bucket, key=key)

Then the download function might look like this. I use a multipart upload to cater for large files.

import boto3
import requests

s3 = boto3.resource('s3')

def download_url(url, bucket_name, key):
    response = requests.get(url, stream=True)
    response.raise_for_status()
    obj = s3.Object(bucket_name, key)
    multi_part_upload = obj.initiate_multipart_upload()
    part_no = 1
    parts = []
    chunk_size = 1024 * 1024 * 10  # Every part except the last must be at least 5 MB
    for chunk in response.iter_content(chunk_size=chunk_size):
        if chunk:
            part = multi_part_upload.Part(part_no)
            part_response = part.upload(Body=chunk)
            parts.append({
                'PartNumber': part_no,
                'ETag': part_response['ETag']
            })
            part_no += 1
    multi_part_upload.complete(MultipartUpload={'Parts': parts})

Clearly I am not dealing with any XML conversion here, per Mark's point. You will need to figure that out when you read the file.
