getResourceAsStream returns an HttpInputStream, not the whole file

Posted 2024-09-02 04:47:18 · 517 characters · 1 view · 0 comments


I have a web application with an applet that copies a file packaged with the applet to the client machine.

When I deploy it to the web server and use: InputStream in = getClass().getResourceAsStream("filename");

in.available() always returns 8192 bytes for every file I tried, which means the file is corrupted when it is copied to the client computer.

The InputStream is of type HttpInputStream (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream). But when I test the applet in the applet viewer, the files are copied fine, and the InputStream returned is a BufferedInputStream, whose available() matches the file's size in bytes. I guess that getResourceAsStream uses a BufferedInputStream when reading from the file system and an HttpInputStream when reading over HTTP.

How can I copy the file completely? Is there a size limit for HttpInputStream?
Thanks a lot.


Comments (2)

旧伤慢歌 2024-09-09 04:47:18


in.available() tells you how many bytes you can read without blocking, not the total number of bytes you can read from a stream.

Here's an example, taken from org.apache.commons.io.IOUtils, of copying an InputStream to an OutputStream:

public static long copyLarge(InputStream input, OutputStream output)
        throws IOException {
    // DEFAULT_BUFFER_SIZE is a constant defined elsewhere in IOUtils (a few KiB)
    byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
    long count = 0;
    int n = 0;
    while (-1 != (n = input.read(buffer))) {
        output.write(buffer, 0, n);
        count += n;
    }
    return count;
}
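That loop is all you need: it keeps reading until read() returns -1 (end of stream), so available() never enters the picture. A minimal, self-contained sketch of the same idea (class name and buffer size are my own choices for illustration, not from the original answer):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Copies every byte from input to output, regardless of what
    // available() reports along the way.
    static long copy(InputStream input, OutputStream output) throws IOException {
        byte[] buffer = new byte[4096];
        long count = 0;
        int n;
        while ((n = input.read(buffer)) != -1) {
            output.write(buffer, 0, n);
            count += n;
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        // 100000 bytes: far more than one 8192-byte network buffer
        byte[] data = new byte[100000];
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), out);
        System.out.println(copied);
        System.out.println(out.size() == data.length);
    }
}
```

The same pattern works whether the stream came from the file system or from an HTTP connection, which is exactly the property the applet needs.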
忆沫 2024-09-09 04:47:18


in.available() always returns 8192 bytes for every file I tried, which means the file is corrupted when it is copied to the client computer.

It does not mean that at all!

The in.available() method returns the number of bytes that can be read without blocking. It is not the length of the stream. In general, there is no way to determine the length of an InputStream other than reading (or skipping) all the bytes in the stream.

(You may have observed that new FileInputStream("someFile").available() usually gives you the file size. But that behaviour is not guaranteed by the spec, is certainly untrue for some kinds of file, and may be untrue for some kinds of file system as well. A better way to get the size of a file is new File("someFile").length(), but even that doesn't work in some cases.)

See @tdavies answer for example code for copying an entire stream's contents. There are also third party libraries that can do this kind of thing; e.g. org.apache.commons.net.io.Util.
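The distinction is easy to reproduce without a real network. The stand-in class below is my own mock (not the actual sun.net HttpInputStream): it reports at most one 8 KiB buffer from available(), the way a socket-backed stream typically does, even though far more data will eventually arrive:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class AvailableDemo {
    // Mock of a network-backed stream: available() reflects only the
    // currently buffered chunk, never the total payload size.
    static class NetworkLikeStream extends ByteArrayInputStream {
        NetworkLikeStream(byte[] buf) { super(buf); }
        @Override public synchronized int available() {
            return Math.min(super.available(), 8192);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[50000];
        NetworkLikeStream in = new NetworkLikeStream(payload);

        int reported = in.available();
        System.out.println(reported);   // only the buffered chunk

        // The only reliable way to learn the length: read until EOF.
        long total = 0;
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) total += n;
        System.out.println(total);      // the full payload
    }
}
```

Sizing a destination buffer from available() would therefore silently truncate the file to the first 8192 bytes, which is precisely the symptom described in the question.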
