Why does FTP sometimes produce transfer errors if the data goes over checksummed TCP?

Posted 2024-10-11 11:32:10

Every once in a while, downloading (especially large) files through ftp will produce errors. I am guessing that's also partly the reason why all major sites are publishing external checksums along with their downloads.

How is this possible if FTP runs over TCP, which has a built-in checksum and retransmits data that arrives corrupted?

One could argue that this is due to the short length of the CRC in the TCP protocol (which I think is 16 bits, or something like that), and that collisions simply happen too often. But:

1) For this to be true, not only must there be a CRC collision, but the random network error must also modify both the CRC in the packet and the packet itself, so that the CRC is still valid for the new packet. Even with a 16-bit CRC, is that so likely?
2) There are seemingly not many errors in, say, browsing the web, which also goes over TCP/IP.
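
For context, the check TCP actually uses is a 16-bit ones'-complement sum (RFC 1071 style) rather than a true CRC. Below is a minimal sketch of that kind of checksum, just to illustrate one class of corruption it cannot catch: swapping two 16-bit words changes the data but leaves the sum unchanged.

```python
# Minimal sketch of a 16-bit Internet checksum (RFC 1071 style).
# Illustrative only: the real TCP checksum also covers a pseudo-header.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # sum 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

original = b"ABCDEFGH"
# Swap the first two 16-bit words: the payload is corrupted, but a
# ones'-complement sum is order-independent, so the checksum is identical.
reordered = b"CDABEFGH"
assert internet_checksum(original) == internet_checksum(reordered)
```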

Comments (2)

老子叫无熙 2024-10-18 11:32:10

FTP distinguishes between ASCII and BINARY data, and can modify the data stream accordingly, which is the most common reason I've encountered for corrupted FTP downloads. (The TCP checksums would be computed on the modified data, so nothing would appear amiss at the TCP level.)
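
As a small illustration with Python's ftplib (host and file names below are placeholders), the mode is selected by which retrieval call you use: retrbinary issues TYPE I so the bytes arrive untouched, whereas retrlines uses TYPE A, where line-ending translation can silently corrupt binary or compressed files.

```python
from ftplib import FTP

# Hypothetical host and file names, used only to illustrate mode selection.
HOST, REMOTE, LOCAL = "ftp.example.org", "archive.tar.gz", "archive.tar.gz"

with FTP(HOST) as ftp:
    ftp.login()                        # anonymous login
    with open(LOCAL, "wb") as out:
        # retrbinary sends "TYPE I" (binary/image mode) before RETR, so the
        # server streams the file byte-for-byte with no newline translation.
        ftp.retrbinary(f"RETR {REMOTE}", out.write)
    # By contrast, retrlines uses "TYPE A" (ASCII mode): line endings may be
    # rewritten in transit, which corrupts anything that isn't plain text.
```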

Next most common, I suppose, would be a transfer that gets truncated due to a timeout or other network error. In that case the TCP checksums would be locally correct, but the partially downloaded file is corrupt.
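
One hedged way to catch that kind of truncation, sketched below with placeholder names, is to compare the local file's length against the server-reported SIZE (an extension that not every FTP server supports).

```python
import os
from ftplib import FTP, error_perm

# Hypothetical names; SIZE is an optional extension on the server side.
HOST, REMOTE, LOCAL = "ftp.example.org", "archive.tar.gz", "archive.tar.gz"

with FTP(HOST) as ftp:
    ftp.login()
    ftp.voidcmd("TYPE I")              # query SIZE in binary mode
    try:
        expected = ftp.size(REMOTE)    # server-reported length in bytes
    except error_perm:
        expected = None                # server does not implement SIZE

if expected is not None and os.path.getsize(LOCAL) != expected:
    print("download appears truncated; retry or resume the transfer")
```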

The FTP protocol is a bit firewall-unfriendly, since it can involve external hosts connecting back on unpredictable port numbers, but that usually manifests as an inability to transfer anything at all, rather than a corrupted download.
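
The usual workaround is passive mode, where the client opens the data connection itself instead of waiting for the server to connect back; a minimal sketch with a placeholder host (ftplib already defaults to passive):

```python
from ftplib import FTP

# Hypothetical host; ftplib defaults to passive mode, shown explicitly here.
with FTP("ftp.example.org") as ftp:
    ftp.login()
    # Passive (PASV): the client opens the data connection to a port the
    # server advertises, which is friendlier to NAT and firewalls.
    ftp.set_pasv(True)
    # Active mode (PORT) would require the server to connect back to the
    # client on an unpredictable port -- the firewall-unfriendly case above --
    # and typically fails outright rather than corrupting data.
    print(ftp.nlst())                  # list the current directory
```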

Apart from ASCII vs. BINARY issues, I can't think of a reason why FTP connections should be more susceptible to corrupted transfers. Maybe you just notice them more, because they tend to be things like binaries or compressed files that need to be bit-for-bit complete and correct, and if not you get a big ugly error message. One is much less likely to notice, say, a missing advertisement on a web page because the connection to the ad network timed out.
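
Which is why checking the published checksum mentioned in the question is the usual safeguard. A minimal sketch, with a placeholder filename and digest value:

```python
import hashlib

# Hypothetical filename and published digest, for illustration only.
LOCAL = "archive.tar.gz"
PUBLISHED_SHA256 = "0123456789abcdef..."   # value copied from the download page

h = hashlib.sha256()
with open(LOCAL, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MiB chunks
        h.update(chunk)

if h.hexdigest() != PUBLISHED_SHA256:
    print("checksum mismatch: the download is corrupt or incomplete")
```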

姜生凉生 2024-10-18 11:32:10

A 16-bit checksum isn't startlingly strong, especially when you consider the size of some FTP transfers, e.g. software downloads. However, there are CRCs and so forth at the lower layers which compensate.

I don't think I've had a corrupt FTP download this century myself.
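
A rough back-of-the-envelope sketch, using assumed and purely illustrative corruption rates, of how rarely an error should slip past a 16-bit checksum on a large transfer:

```python
# Back-of-the-envelope estimate with assumed, illustrative numbers.
transfer_bytes  = 4 * 1024**3           # a 4 GiB download
segment_bytes   = 1460                  # typical TCP payload per segment
corruption_rate = 1e-7                  # assumed rate of segments corrupted in a
                                        # way the link-layer CRC already missed
segments = transfer_bytes / segment_bytes
undetected = segments * corruption_rate / 65536   # ~1/2^16 also pass TCP's checksum
print(f"{segments:.0f} segments, expected undetected errors ~ {undetected:.2e}")
```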
