How do I calculate the optimal chunk size for uploading large files?
Is there such a thing as an optimum chunk size for processing large files? I have an upload service (WCF) which is used to accept file uploads of several hundred megabytes.
I've experimented with chunk sizes from 4KB and 8KB up to 1MB. Bigger chunk sizes are good for performance (faster processing), but they come at the cost of memory.
So, is there a way to work out the optimum chunk size at the moment of uploading a file? How would one go about doing such a calculation? Would it be some combination of the client's available memory, CPU and network bandwidth that determines the optimum size?
Cheers
EDIT: I should probably mention that the client app will be in Silverlight.
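For context, the chunked-upload pattern under discussion reads the file one fixed-size block at a time, so only a single chunk is held in memory at once. A minimal Python sketch (the 64KB figure is just an example value, not from the question):

```python
CHUNK_SIZE = 64 * 1024  # 64KB; the tunable value under discussion

def read_chunks(path, chunk_size=CHUNK_SIZE):
    # Yield the file one chunk at a time so memory use stays bounded
    # regardless of total file size.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```

Each yielded chunk would then be sent to the upload service before the next one is read.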
If you are concerned about running out of resources, then the optimum is probably best determined by evaluating your peak upload concurrency against your system's available memory. How many simultaneous uploads you have in progress at a time is the critical variable in any calculation you might do. All you have to do is make sure you have enough memory to handle the upload concurrency, and that's rather trivial to achieve. Memory is cheap, and you will likely run out of network bandwidth long before your concurrency overruns your memory availability.
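As a rough illustration of that budgeting, the chunk size that fits a given memory budget is just the budget divided by peak concurrency, with some headroom. A minimal sketch (the budget, concurrency figure, and safety factor below are assumed example values, not from the answer):

```python
def max_chunk_for_memory(memory_budget_bytes, peak_concurrent_uploads, safety_factor=2):
    # Each in-flight upload may hold up to `safety_factor` chunks in memory
    # at once (e.g. one being received, one being written out).
    return memory_budget_bytes // (peak_concurrent_uploads * safety_factor)

# E.g. a 512MB budget with 64 concurrent uploads allows 4MB chunks.
chunk = max_chunk_for_memory(512 * 1024 * 1024, 64)
```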
On the performance side, this isn't the kind of thing you can really optimize much during app design and development. You have to have the system in place, users uploading files for real, and then you can monitor actual runtime performance.
Try a chunk size that matches your network's TCP/IP window size. That's about as optimal as you'd really need to get at design time.
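As a sketch of that idea, one could use the operating system's default TCP send-buffer size as a rough stand-in for the window size and clamp it to sensible bounds. The bounds and the buffer-as-proxy choice are assumptions for illustration, not part of the answer:

```python
import socket

def pick_chunk_size(min_chunk=64 * 1024, max_chunk=1024 * 1024):
    # Use the OS default TCP send buffer as a rough proxy for the
    # TCP window size, clamped to [min_chunk, max_chunk].
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    finally:
        s.close()
    return max(min_chunk, min(sndbuf, max_chunk))
```

Note that modern TCP stacks auto-tune the window at runtime, so this only gives a design-time starting point, not an exact match.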