JVM crashes at ZipFile.getNextEntry() - part of POI BigGridDemo
I have a Java application that generates Excel sheets. I am basing it on Apache POI's BigGridDemo example to generate the Excel (xlsx) output.
The idea is to
- create a template workbook with the sheets and global objects such as cell styles, number formats, etc.
- create an application that streams the data into a text file (a rough sketch of this step follows the list)
- substitute the sheet in the template with the generated data
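Step 2 boils down to writing the sheet's raw SpreadsheetML to a temporary file with a Writer, so the full workbook never has to be held in memory. A minimal sketch of that idea (class and method names here are illustrative, trimmed down from what BigGridDemo actually does):

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;

// Illustrative sketch: stream sheet data as raw SpreadsheetML into a temp file.
public class SheetXmlWriter {

    public static void writeSheet(File xmlFile, int rows, int cols) throws IOException {
        Writer w = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream(xmlFile), "UTF-8"));
        try {
            w.write("<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>");
            w.write("<worksheet xmlns=\"http://schemas.openxmlformats.org/spreadsheetml/2006/main\">");
            w.write("<sheetData>");
            for (int r = 0; r < rows; r++) {
                w.write("<row r=\"" + (r + 1) + "\">");
                for (int c = 0; c < cols; c++) {
                    // cell reference = column letter + 1-based row number
                    // (single-letter columns only, to keep the sketch short)
                    String ref = (char) ('A' + c) + Integer.toString(r + 1);
                    w.write("<c r=\"" + ref + "\"><v>" + (r * cols + c) + "</v></c>");
                }
                w.write("</row>");
            }
            w.write("</sheetData>");
            w.write("</worksheet>");
        } finally {
            w.close();
        }
    }
}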
On Linux, during the third step, the JVM crashes with this message:
# A fatal error has been detected by the Java Runtime Environment:
# SIGSEGV (0xb) at pc=0x000000307a772c44, pid=11781, tid=1088649568
#
# JRE version: 6.0_24-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (19.1-b02 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libc.so.6+0x72c44] memcpy+0x34
The hs_err_pid file contains this:
C [libc.so.6+0x72c44] memcpy+0x34
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j java.util.zip.ZipFile.getNextEntry(JI)J+0
j java.util.zip.ZipFile.access$400(JI)J+2
j java.util.zip.ZipFile$2.nextElement()Ljava/util/zip/ZipEntry;+54
j java.util.zip.ZipFile$2.nextElement()Ljava/lang/Object;+1
It looks like this happens when the template workbook is read as a zip file. This is the code that does it:
ZipFile zip = new ZipFile(zipfile);
ZipOutputStream zos = new ZipOutputStream(out);
Enumeration<ZipEntry> en = (Enumeration<ZipEntry>) zip.entries();
while (en.hasMoreElements()) {
    ZipEntry ze = en.nextElement();
    if (!ze.getName().equals(entry)) {
        zos.putNextEntry(new ZipEntry(ze.getName()));
        InputStream is = zip.getInputStream(ze);
        copyStream(is, zos);
        is.close();
    }
}
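For completeness, copyStream is a plain buffer copy in the style of BigGridDemo's helper (a sketch, assuming the signature used at the call site):

private static void copyStream(InputStream in, OutputStream out) throws IOException {
    // copy the zip entry in 1 KB chunks until end of stream
    byte[] chunk = new byte[1024];
    int count;
    while ((count = in.read(chunk)) >= 0) {
        out.write(chunk, 0, count);
    }
}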
How can I avoid this crash?
2 Answers
Given the kind of crash you're having, it appears you are reading and writing to the same zip file.
I wrote up why this happens in another answer, but for your use case, you should write to a different output zip file as you iterate through the input zip file.
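Concretely, the substitution step could look like this (file and method names are illustrative, and it reuses the copyStream helper from the question); the key point is that the ZipFile you read from and the ZipOutputStream you write to refer to different files:

// Sketch: copy every entry of the template into a *different* output file,
// replacing the named sheet entry with the generated XML.
// Uses java.util.zip.* and java.io.*, plus copyStream from the question.
public static void substitute(File templateZip, File outputZip,
                              String sheetEntry, File generatedXml) throws IOException {
    ZipFile zip = new ZipFile(templateZip);
    ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(outputZip));
    try {
        Enumeration<? extends ZipEntry> en = zip.entries();
        while (en.hasMoreElements()) {
            ZipEntry ze = en.nextElement();
            if (!ze.getName().equals(sheetEntry)) {
                zos.putNextEntry(new ZipEntry(ze.getName()));
                InputStream is = zip.getInputStream(ze);
                copyStream(is, zos);
                is.close();
            }
        }
        // write the generated sheet under the original entry name
        zos.putNextEntry(new ZipEntry(sheetEntry));
        InputStream is = new FileInputStream(generatedXml);
        copyStream(is, zos);
        is.close();
    } finally {
        zos.close();
        zip.close();
    }
}

If the result eventually has to replace the template on disk, rename or copy it over only after both the ZipFile and the ZipOutputStream have been closed.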
The JVM will crash on Linux (and possibly other platforms) if you tell it to use 1GB, 2GB, etc. and the system doesn't have that much memory available. When the program inside the JVM tries to allocate more than the JVM's maximum heap setting, that results in an OutOfMemoryError, and the JVM doesn't crash. I'd check and make sure there isn't another program on your system eating up more memory than you realize. You can also check the heap dumps that come out of the JVM. When the JVM crashes it writes out a report (the hs_err_pid file) that tells you more about the state of the machine at crash time, such as how much system memory was in use.
1GB is a massive amount of memory; my IDE doesn't run with that much. You probably need to look at what you're doing in jconsole or a profiler and see whether you're eating up the memory yourself. I know POI can chew up a lot of memory, but you need to find a way to pull that back down.
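If you want a quick sanity check of how close you are to the configured heap limit while the sheets are being generated, plain java.lang.Runtime is enough:

// Log current Java heap usage against the configured -Xmx limit.
Runtime rt = Runtime.getRuntime();
long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
long maxMb  = rt.maxMemory() / (1024 * 1024);
System.out.println("Heap used: " + usedMb + " MB of " + maxMb + " MB max");

That only covers the Java heap, though, so treat it as a complement to jconsole and the hs_err report rather than a replacement.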