C#: How can I upload to MinIO (AWS S3 compatible API) via gRPC without buffering the data?

How can I upload large files to MinIO (AWS S3 compatible API) via a gRPC service without buffering the data in memory?

I have a gRPC service with the following definition:

service MediaService {
    rpc UploadMedia(stream UploadMediaRequest) returns (UploadMediaResponse);
}

message UploadMediaRequest {
    oneof Data {
        UploadMediaMetadata metadata = 1;
        UploadMediaStream fileStream = 2;
    }
}

message UploadMediaMetadata {
    string bucket = 1;
    string virtialDirectory = 2;
    string fileName = 3;
    string contentType = 4;
    map<string, string> attributes = 6;
}

message UploadMediaStream {
    bytes bytes = 1;
}
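
For reference, the client sends the metadata message first and then the file content as a sequence of chunks. Here is a simplified sketch of the calling side (assuming the standard Grpc.Tools-generated MediaService.MediaServiceClient; the address, file path and chunk size are just placeholders):

    using Google.Protobuf;
    using Grpc.Net.Client;

    using var channel = GrpcChannel.ForAddress("https://localhost:5001"); // placeholder address
    var client = new MediaService.MediaServiceClient(channel);

    using var call = client.UploadMedia();

    // 1. The metadata message goes first.
    await call.RequestStream.WriteAsync(new UploadMediaRequest
    {
        Metadata = new UploadMediaMetadata
        {
            Bucket = "media",
            FileName = "video.mp4",
            ContentType = "video/mp4"
        }
    });

    // 2. Then the file as a sequence of byte chunks.
    await using var file = File.OpenRead("video.mp4");
    var buffer = new byte[64 * 1024];
    int read;
    while ((read = await file.ReadAsync(buffer)) > 0)
    {
        await call.RequestStream.WriteAsync(new UploadMediaRequest
        {
            FileStream = new UploadMediaStream { Bytes = ByteString.CopyFrom(buffer, 0, read) }
        });
    }

    // 3. Complete the stream and wait for the response.
    await call.RequestStream.CompleteAsync();
    var response = await call.ResponseAsync;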

And here is the implementation of UploadMedia:

        public override async Task<UploadMediaResponse> UploadMedia(
            IAsyncStreamReader<UploadMediaRequest> requestStream,
            ServerCallContext context)
        {
            UploadMediaMetadata? metadata = null;
            var token = context.CancellationToken;
            var traceId = context.GetHttpContext().TraceIdentifier;

            await using var memoryStream = new MemoryStream();
            await foreach (var req in requestStream.ReadAllAsync(token))
            {
                if (req.DataCase == UploadMediaRequest.DataOneofCase.Metadata)
                {
                    metadata = req.Metadata;
                    _logger.LogTrace("[Req: {TraceId}] Received metadata", traceId);
                }
                else
                {
                    await memoryStream.WriteAsync(req.FileStream.Bytes.Memory, token);
                    _logger.LogTrace("[Req: {TraceId}] Received chunk of bytes", traceId);
                }
            }

            if (metadata == null)
            {
                throw new RpcException(new Status(StatusCode.InvalidArgument, "Not found metadata."));
            }

            memoryStream.Seek(0L, SeekOrigin.Begin);

            var uploadModel = _mapper.Map<UploadModel>(metadata);
            uploadModel.FileStream = memoryStream;

            var file = await _fileService.UploadFile(uploadModel, token);
            await _eventsService.Notify(new MediaUploadedEvent(file.PublicId), token);

            _logger.LogTrace("[Req: {TraceId}] File uploaded", traceId);

            return new UploadMediaResponse { File = _mapper.Map<RpcFileModel>(file) };
        }

In the method I read the request stream and write the data to a MemoryStream. After that I upload the file to storage:

        var putObjectArgs = new PutObjectArgs()
                            .WithStreamData(fileStream)
                            .WithObjectSize(fileStream.Length)
                            .WithObject(virtualPath)
                            .WithBucket(bucket)
                            .WithContentType(contentType)
                            .WithHeaders(attributes);

        return _storage.PutObjectAsync(putObjectArgs, token);

I want to upload files without buffering the data in memory.
I know I could write the bytes from the stream to disk and then create a FileStream from that file, but I don't want to introduce another dependency.
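
For example, one direction that avoids both the MemoryStream and a temp file might be to bridge the incoming gRPC chunks into a forward-only Stream with System.IO.Pipelines and hand it to the upload code while the chunks are still arriving. A rough, untested sketch (the existing PutObjectArgs would also have to stop relying on fileStream.Length, e.g. by taking the size from metadata or using an unknown-size/multipart upload if the MinIO client supports that):

    // Sketch only; requires: using System.IO.Pipelines;
    // Read the metadata message first, then pump the remaining chunks into a Pipe.
    if (!await requestStream.MoveNext(token) ||
        requestStream.Current.DataCase != UploadMediaRequest.DataOneofCase.Metadata)
    {
        throw new RpcException(new Status(StatusCode.InvalidArgument, "Not found metadata."));
    }
    var metadata = requestStream.Current.Metadata;

    var pipe = new Pipe();

    // Producer: copy each incoming chunk into the pipe as it arrives.
    var pumpTask = Task.Run(async () =>
    {
        try
        {
            await foreach (var req in requestStream.ReadAllAsync(token))
            {
                await pipe.Writer.WriteAsync(req.FileStream.Bytes.Memory, token);
            }
            await pipe.Writer.CompleteAsync();
        }
        catch (Exception ex)
        {
            await pipe.Writer.CompleteAsync(ex);
        }
    }, token);

    // Consumer: a non-seekable Stream over the pipe, passed to the existing upload code.
    var uploadModel = _mapper.Map<UploadModel>(metadata);
    uploadModel.FileStream = pipe.Reader.AsStream();

    var file = await _fileService.UploadFile(uploadModel, token);
    await pumpTask;

The pump and the upload would run concurrently, so only the pipe's internal buffer is held in memory at any time, but I haven't verified whether the MinIO client accepts a non-seekable stream like this.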
