Open command seg faults when trying to open a very large file
I'm taking a networking class at school and am using C/GDB for the first time. Our assignment is to make a webserver that communicates with a client browser. I am well underway and can open files and send them to the client. Everything goes great till I open a very large file and then I seg fault. I'm not a pro at C/GDB so I'm sorry if that is causing me to ask silly questions and not be able to see the solution myself but when I looked at the dumped core I see my seg fault comes here:
if (-1 == (openfd = open(path, O_RDONLY)))
Specifically we are tasked with opening the file and then sending it to the client browser. My algorithm goes:
- Open/Error catch
- Read the file into a buffer/Error catch
- Send the file
We were also tasked with making sure that the server doesn't crash when SENDING very large files. But my problem seems to be with opening them. I can send all my smaller files just fine. The file in question is 29.5MB.
The whole algorithm is:
ssize_t send_file(int conn, char *path, int len, int blksize, char *mime) {
    int openfd;        // File descriptor for file we open at path
    int temp;          // Counter for the size of the file that we send
    char buffer[len];  // Buffer to read the file we are opening that is len big

    // Open the file
    if (-1 == (openfd = open(path, O_RDONLY))) {
        send_head(conn, "", 400, strlen(ERROR_400));
        (void) send(conn, ERROR_400, strlen(ERROR_400), 0);
        logwrite(stdout, CANT_OPEN);
        return -1;
    }

    // Read from file
    if (-1 == read(openfd, buffer, len)) {
        send_head(conn, "", 400, strlen(ERROR_400));
        (void) send(conn, ERROR_400, strlen(ERROR_400), 0);
        logwrite(stdout, CANT_OPEN);
        return -1;
    }
    (void) close(openfd);

    // Send the buffer now
    logwrite(stdout, SUC_REQ);
    send_head(conn, mime, 200, len);
    send(conn, &buffer[0], len, 0);
    return len;
}
I dunno if it is just the fact that I am a Unix/C novice. Sorry if it is. =( But your help is much appreciated.
Answers (4)
It's possible I'm just misunderstanding what you meant in your question, but I feel I should point out that in general, it's a bad idea to try to read the entire file at once, in case you deal with something that's just too big for your memory to handle.
It's smarter to allocate a buffer of a specific size, say 8192 bytes (well, that's what I tend to do a lot, anyway), and just always read and send that much, as much as necessary, until your read() operation returns 0 (and no errno set) for end of stream.
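The fixed-size-buffer loop described above might look something like the sketch below. The function name `send_file_chunked` is made up for illustration, and the question's `send_head`/`logwrite` bookkeeping is omitted; it assumes `conn` is a connected socket.

```c
/* Chunked-send sketch: a small fixed-size buffer instead of one
 * buffer sized to the whole file. Loops until read() returns 0
 * (end of file) or -1 (error). */
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>

ssize_t send_file_chunked(int conn, const char *path)
{
    char buffer[8192];          /* small, fixed-size stack buffer */
    ssize_t nread, total = 0;

    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;

    while ((nread = read(fd, buffer, sizeof buffer)) > 0) {
        ssize_t off = 0;
        while (off < nread) {   /* send() may write fewer bytes than asked */
            ssize_t sent = send(conn, buffer + off, nread - off, 0);
            if (sent == -1) {
                close(fd);
                return -1;
            }
            off += sent;
        }
        total += nread;
    }
    close(fd);
    return (nread == -1) ? -1 : total;
}
```

With this shape the memory footprint stays at 8 KiB no matter how large the file is, so a 29.5 MB file is no different from a 1 KB one.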
I suspect you have a stackoverflow (I should get bonus points for using that term on this site).
The problem is you are allocating the buffer for the entire file on the stack all at once. For larger files, this buffer is larger than the stack, and the next time you try to call a function (and thus put some parameters for it on the stack) the program crashes.
The crash appears at the open line because allocating the buffer on the stack doesn't actually write any memory, it just changes the stack pointer. When your call to open tries to write the parameters to the stack, the top of the stack has now overflowed, and this causes a crash.
The solution is, as Platinum Azure or dreamlax suggest, to read in the file a little bit at a time, or to allocate your buffer on the heap with malloc or new.
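The heap-allocation option could be sketched as below. The helper name `read_whole_file` is invented for illustration; it assumes, like the question's `send_file`, that the caller already knows the file length.

```c
/* Heap-allocated variant of the question's buffer: malloc()
 * replaces the VLA, so a 29.5 MB buffer lives on the heap
 * instead of the stack. */
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

ssize_t read_whole_file(const char *path, char **out, size_t len)
{
    char *buffer = malloc(len);      /* heap, not stack */
    if (buffer == NULL)
        return -1;

    int fd = open(path, O_RDONLY);
    if (fd == -1) {
        free(buffer);
        return -1;
    }

    /* Note: a single read() may return fewer than len bytes;
     * a robust version would loop until len bytes are read or EOF. */
    ssize_t nread = read(fd, buffer, len);
    close(fd);
    if (nread == -1) {
        free(buffer);
        return -1;
    }
    *out = buffer;                   /* caller must free() */
    return nread;
}
```

Note that every error path frees the buffer and closes the descriptor, which the question's version also needs (its read-error branch leaks `openfd`).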
Rather than using a variable length array, perhaps try allocating the memory using malloc(). I just did some simple tests on my system, and when I use variable length arrays of a big size (like the size you're having trouble with), I also get a SEGFAULT.
You're allocating the buffer on the stack, and it's way too big.
When you allocate storage on the stack, all the compiler does is decrease the stack pointer enough to make that much room (this keeps stack variable allocation to constant time). It does not try to touch any of this stacked memory. Then, when you call open(), it tries to put the parameters on the stack and discovers it has overflowed the stack, and dies. You need to either operate on the file in chunks, memory-map it (mmap()), or malloc() storage.
Also, path should be declared const char *.