Unable to download a zip file from a JSF page

Posted on 2024-11-28 04:03:39

I am trying to download multiple PDF files as a single zip file and then update the details on a JSF page, effectively indicating that I am working on those files. I have achieved this with two requests behind the scenes: 1) update the database details and refresh the screen, and 2) download the zip file.
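The "multiple PDFs into one zip" step is hidden inside `service.checkout()` in the code below; as a point of reference, bundling the files server-side can be sketched with the standard `java.util.zip` API. The class name, file array, and target stream here are hypothetical stand-ins (the sketch deliberately avoids try-with-resources to stay Java 6 compatible, matching the RichFaces 3.3 era):

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

class ZipBundleSketch {

    // Bundle several PDF files into one zip written to the given stream.
    // The file array and output stream are stand-ins for the checked-out
    // documents and the servlet's output stream.
    static void zipFiles(File[] pdfs, OutputStream out) throws IOException {
        ZipOutputStream zip = new ZipOutputStream(out);
        byte[] buffer = new byte[8192];
        for (File pdf : pdfs) {
            zip.putNextEntry(new ZipEntry(pdf.getName()));
            InputStream in = new FileInputStream(pdf);
            try {
                int read;
                while ((read = in.read(buffer)) != -1) {
                    zip.write(buffer, 0, read);
                }
            } finally {
                in.close();
            }
            zip.closeEntry();
        }
        zip.finish();
    }

    public static void main(String[] args) throws IOException {
        // Demo with a temp file standing in for one checked-out PDF.
        File tmp = File.createTempFile("doc", ".pdf");
        FileOutputStream fos = new FileOutputStream(tmp);
        fos.write("dummy pdf content".getBytes());
        fos.close();

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        zipFiles(new File[] { tmp }, baos);
        byte[] zipBytes = baos.toByteArray();
        tmp.delete();

        // Every zip stream starts with the local-file-header signature "PK".
        System.out.println(zipBytes.length > 2 && zipBytes[0] == 'P' && zipBytes[1] == 'K');
    }
}
```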

This works fine in a single-workstation Windows environment, but the moment I deploy it in a Linux environment behind a load balancer, I get a blank page when trying to download the zip. I have added System.out.println statements to log the size of the file being sent to the ServletOutputStream via the JSF backing bean, and the correct file sizes are printed. Yet somehow I keep losing both the zip and the updated JSF page. The scenario also occurs randomly on Windows, which worries me :(. Please share your valuable suggestions and help me out of this issue.

Some points you might want to consider: I am using RichFaces 3.3.3 Final with the IE 8 browser, and the response transfer encoding is chunked.

The backing-bean (BB) method is as follows:

String checkoutDoc = service.checkout(docId, true, contract, error);
FacesContext ctx = FacesContext.getCurrentInstance();
HttpServletResponse response = (HttpServletResponse) ctx.getExternalContext().getResponse();
File tempPdf = new File(checkoutDoc);
URI tempURI = tempPdf.toURI();
URL pdfURL = tempURI.toURL();
ServletOutputStream outstream = response.getOutputStream();
try {
    URLConnection urlConn = pdfURL.openConnection();
    response.setContentType("application/zip");
    response.setHeader("Transfer-Encoding", "chunked");
    response.addHeader("Content-disposition", "attachment;filename=" + docId.toString() + ".zip");
    BufferedInputStream bufInStrm = new BufferedInputStream(urlConn.getInputStream());
    int readBytes = 0;
    int bufferSize = 8192;
    byte[] buffer = new byte[bufferSize];
    while ((readBytes = bufInStrm.read(buffer)) != -1) {
        if (readBytes == bufferSize) {
            outstream.write(buffer);
        } else {
            outstream.write(buffer, 0, readBytes);
        }
        outstream.flush();
        response.flushBuffer();
    }
    bufInStrm.close();
} finally {
    outstream.close();
}
FacesContext.getCurrentInstance().responseComplete();

The request headers that I captured using the Firefox HTTP monitor are given below.

(Request-Line)  POST /XXX/application/pages/xxx.xhtml HTTP/1.1
Host    xxx.xxx.com
User-Agent  Mozilla/5.0 (Windows NT 5.1; rv:5.0.1) Gecko/20100101 Firefox/5.0.1
Accept  text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip, deflate
Accept-Charset  ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection  keep-alive
Referer http://xxx.com/xxx/application/pages/xxx.xhtml
Cookie  JSESSIONID=E27C156AA37E5984073FAB847E4958D2.XXXX;  fontSize=null; pageWidth=fullWidth
Content-Type    multipart/form-data; boundary=---------------------------288695814700
Content-Length  1442



Answer by 我很OK, 2024-12-05 04:03:39

You should not set the Transfer-Encoding: chunked header yourself unless you are actually writing out the body in chunked encoding yourself, for example using a ChunkedOutputStream. The Servlet API does this automatically whenever the response buffer is full and the response content length is unknown. However, whenever you set this header yourself without actually writing the body in chunked encoding, the behaviour is completely unspecified and depends on the servlet container used.

Remove that header and let the Servlet API do its job. To improve performance (so that the Servlet API won't switch to chunked encoding when the response buffer fills up), set the Content-Length response header as well.

That said, your streaming approach is a bit clumsy. Massaging the File into a URL is unnecessary, and so is the if-else inside the read loop. May I suggest the following?

// ...
File tempPdf = new File(checkoutDoc);

ExternalContext externalContext = FacesContext.getCurrentInstance().getExternalContext();
externalContext.setResponseContentType("application/zip");
externalContext.setResponseHeader("Content-Disposition", "attachment;filename=\"" + docId + ".zip\"");
externalContext.setResponseHeader("Content-Length", String.valueOf(tempPdf.length()));

Files.copy(tempPdf.toPath(), externalContext.getResponseOutputStream());
FacesContext.getCurrentInstance().responseComplete();
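One compatibility note on the snippet above: `Files.copy(Path, OutputStream)` requires Java 7, while RichFaces 3.3.x applications frequently ran on Java 6. A rough pre-NIO equivalent, sketched here with a hypothetical helper class name, would be:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class StreamCopySketch {

    // Java 6 stand-in for Files.copy(Path, OutputStream): copy a file to a
    // stream in 8 KB chunks and return the number of bytes written.
    static long copy(File source, OutputStream out) throws IOException {
        InputStream in = new BufferedInputStream(new FileInputStream(source));
        try {
            byte[] buffer = new byte[8192];
            long total = 0;
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
                total += read;
            }
            return total;
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo: copy a small temp file into a byte buffer.
        File tmp = File.createTempFile("doc", ".pdf");
        FileOutputStream fos = new FileOutputStream(tmp);
        fos.write("hello".getBytes());
        fos.close();

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        long written = copy(tmp, baos);
        tmp.delete();

        System.out.println(written == 5 && "hello".equals(baos.toString()));
    }
}
```

Note that, unlike the question's original loop, there is no per-chunk `if (readBytes == bufferSize)` branch: `write(buffer, 0, read)` is correct for both full and partial reads.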

See also:

