C malloc/free + performance

Posted 2024-07-25 01:19:46


As I loop through lines in file A, I am parsing the line and putting each string (char*) into a char**.

At the end of a line, I then run a procedure that consists of opening file B, using fgets, fseek and fgetc to grab characters from that file. I then close file B.

I repeat reopening and reclosing file B for each line.

What I would like to know is:

  1. Is there a significant performance hit from using malloc and free, such that I should use something static like myArray[NUM_STRINGS][MAX_STRING_WIDTH] instead of a dynamic char** myArray?

  2. Is there significant performance overhead from opening and closing file B (conceptually, many thousands of times)? If my file A is sorted, is there a way for me to use fseek to move "backwards" in file B, to reset where I was previously located in file B?

EDIT: It turns out that a two-fold approach greatly reduced the runtime:

  1. My file B is actually one of twenty-four files. Instead of opening the same file B1 a thousand times, then B2 a thousand times, and so on, I open file B1 once, close it, open B2 once, close it, etc. This reduces many thousands of fopen and fclose operations to roughly 24.

  2. I used rewind() to reset the file pointer.

This yielded a roughly 60-fold speed improvement, which is more than sufficient. Thanks for pointing me to rewind().
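
For reference, a minimal sketch of the restructured loop described in the edit. The file names (A.txt, B1.txt ... B24.txt) and the helpers which_b_file() and scan_file_b() are hypothetical stand-ins for the real parsing and lookup code:

    #include <stdio.h>

    /* Hypothetical: pick which of the 24 B files a line of A refers to. */
    static int which_b_file(const char *line_from_a)
    {
        return ((unsigned char)line_from_a[0] % 24) + 1;
    }

    /* Hypothetical stand-in for the fgets/fseek/fgetc work done against file B. */
    static void scan_file_b(FILE *fb, const char *line_from_a)
    {
        (void)line_from_a;
        (void)fgetc(fb);
    }

    int main(void)
    {
        FILE *fa = fopen("A.txt", "r");
        if (!fa) { perror("A.txt"); return 1; }

        FILE *fb = NULL;
        int current = -1;
        char line[4096], name[32];

        while (fgets(line, sizeof line, fa)) {
            int wanted = which_b_file(line);
            if (wanted != current) {              /* A is sorted, so this changes only ~24 times */
                if (fb) fclose(fb);
                snprintf(name, sizeof name, "B%d.txt", wanted);
                fb = fopen(name, "r");
                if (!fb) { perror(name); break; }
                current = wanted;
            } else {
                rewind(fb);                       /* same B file: just reset the read position */
            }
            scan_file_b(fb, line);
        }

        if (fb) fclose(fb);
        fclose(fa);
        return 0;
    }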


Replies (8)

秋日私语 2024-08-01 01:19:46


If your dynamic array grows over time, there is a copy cost on some reallocs. If you use the "always double" heuristic, this is amortized to O(n), so it is not horrible (see the sketch below). If you know the size ahead of time, a stack-allocated array will still be faster.

For the second question, read about rewind(). It has got to be faster than opening and closing all the time, and it lets you do less resource management.
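
A rough sketch of the "always double" heuristic applied to a char** of parsed strings (the initial capacity and tokens are made up; strdup is POSIX/C23):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t count = 0, capacity = 8;
        char **strings = malloc(capacity * sizeof *strings);
        if (!strings) return 1;

        const char *tokens[] = { "alpha", "beta", "gamma", "delta" };
        for (size_t i = 0; i < sizeof tokens / sizeof tokens[0]; i++) {
            if (count == capacity) {                 /* full: double, so copies amortize to O(n) */
                char **tmp = realloc(strings, 2 * capacity * sizeof *strings);
                if (!tmp) { free(strings); return 1; }
                strings = tmp;
                capacity *= 2;
            }
            strings[count] = strdup(tokens[i]);      /* each element owns a heap copy */
            if (!strings[count]) break;
            count++;
        }

        for (size_t i = 0; i < count; i++) {
            puts(strings[i]);
            free(strings[i]);
        }
        free(strings);
        return 0;
    }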

暮光沉寂 2024-08-01 01:19:46


What I would like to know is:

  • does your code work correctly?
  • is it running fast enough for your purpose?

If the answer to both of these is "yes", don't change anything.

红颜悴 2024-08-01 01:19:46


Opening and closing has a variable overhead, depending on whether other programs are competing for that resource.

Measure the file size first and then use that to calculate the array size in advance, so you can do one big heap allocation (see the sketch below).

You won't get a multi-dimensional array right off, but a bit of pointer arithmetic and you are there.

Can you not cache positional information in the other file and then, rather than opening and closing it, use previous seek indexes as an offset? Depends on the exact logic really.
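
A sketch of the measure-first idea from this answer: get the file size up front, make one big heap allocation, and index rows with pointer arithmetic instead of a true multi-dimensional array. The file name and MAX_STRING_WIDTH are assumptions:

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_STRING_WIDTH 128   /* assumed per-line bound */

    int main(void)
    {
        FILE *fa = fopen("A.txt", "r");
        if (!fa) { perror("A.txt"); return 1; }

        fseek(fa, 0, SEEK_END);
        long size = ftell(fa);                       /* file size in bytes */
        rewind(fa);

        /* Every line occupies at least 2 bytes ("x\n"), so size/2 rows is enough. */
        size_t rows = (size > 1) ? (size_t)size / 2 : 1;
        char *block = malloc(rows * MAX_STRING_WIDTH);
        if (!block) { fclose(fa); return 1; }

        size_t n = 0;                                /* row i lives at block + i * MAX_STRING_WIDTH */
        while (n < rows && fgets(block + n * MAX_STRING_WIDTH, MAX_STRING_WIDTH, fa))
            n++;                                     /* overlong lines simply wrap into the next row */

        printf("read %zu rows\n", n);
        free(block);
        fclose(fa);
        return 0;
    }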

等风也等你 2024-08-01 01:19:46

  1. If your files are large, disk I/O will be far more expensive than memory management. Worrying about malloc/free performance before profiling shows it to be a bottleneck is premature optimization.

  2. It is possible that the overhead from frequent open/close is significant in your program, but again the actual I/O is likely to be more expensive, unless the files are small, in which case the loss of buffers between close and open can potentially cause extra disk I/O. And yes, you can use ftell() to get the current position in the file, then fseek() with SEEK_SET to get back to it (see the sketch below).
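
A sketch of the ftell()/fseek(..., SEEK_SET) approach from point 2: remember the current offset in file B, do other work, then jump straight back. The file name is an assumption:

    #include <stdio.h>

    int main(void)
    {
        FILE *fb = fopen("B1.txt", "r");
        if (!fb) { perror("B1.txt"); return 1; }

        char buf[256];
        if (fgets(buf, sizeof buf, fb))              /* read part of the file ... */
            printf("first line: %s", buf);

        long pos = ftell(fb);                        /* ... and remember where we stopped */
        if (pos < 0) { perror("ftell"); fclose(fb); return 1; }

        rewind(fb);                                  /* go do something else in the file */

        if (fseek(fb, pos, SEEK_SET) == 0 &&         /* jump straight back to the saved spot */
            fgets(buf, sizeof buf, fb))
            printf("resumed at: %s", buf);

        fclose(fb);
        return 0;
    }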

满天都是小星星 2024-08-01 01:19:46


There is always a performance hit with using dynamic memory. Using a static buffer will provide a speed boost (see the sketch below).

There is also going to be a performance hit with reopening a file. You can use fseek(fp, pos, SEEK_SET) to set the file pointer to any position in the file, or fseek(fp, offset, SEEK_CUR) to do a relative move.

Whether the performance hit is significant is relative, and you will have to determine what that means for yourself.
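
A sketch of the static-buffer alternative: one fixed array reused for every line, so there is no malloc/free traffic at all. NUM_STRINGS, MAX_STRING_WIDTH, and the sample line are assumed values:

    #include <stdio.h>
    #include <string.h>

    #define NUM_STRINGS      64    /* assumed upper bounds */
    #define MAX_STRING_WIDTH 128

    static char myArray[NUM_STRINGS][MAX_STRING_WIDTH];  /* one fixed block, reused per line */

    int main(void)
    {
        char scratch[256] = "alpha beta gamma";           /* stand-in for a parsed line of file A */

        size_t n = 0;
        for (char *tok = strtok(scratch, " "); tok && n < NUM_STRINGS; tok = strtok(NULL, " ")) {
            strncpy(myArray[n], tok, MAX_STRING_WIDTH - 1);
            myArray[n][MAX_STRING_WIDTH - 1] = '\0';
            n++;
        }

        for (size_t i = 0; i < n; i++)
            puts(myArray[i]);
        return 0;
    }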

梦境 2024-08-01 01:19:46

  1. I think it's better to allocate the actual space you need; the overhead will probably not be significant. This avoids both wasting space and stack overflows (see the sketch below).

  2. Yes. Though the I/O is cached, you're making unnecessary syscalls (open and close). Use fseek, probably with SEEK_CUR or SEEK_SET.
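
A sketch of "allocate the actual space you need": each token gets a heap copy sized to its own length, so nothing is padded to a worst-case width and nothing large lands on the stack. The helper and tokens are illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative helper: copy a token into an exactly-sized heap buffer. */
    static char *copy_token(const char *tok)
    {
        size_t len = strlen(tok);
        char *s = malloc(len + 1);        /* exact size: no worst-case padding */
        if (s)
            memcpy(s, tok, len + 1);      /* include the terminating '\0' */
        return s;
    }

    int main(void)
    {
        const char *tokens[] = { "one", "twenty-four", "rewind" };
        char *copies[3];

        for (int i = 0; i < 3; i++)
            copies[i] = copy_token(tokens[i]);

        for (int i = 0; i < 3; i++) {
            if (copies[i])
                puts(copies[i]);
            free(copies[i]);
        }
        return 0;
    }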

少女情怀诗 2024-08-01 01:19:46


In both cases, there is some performance hit, but the significance will depend on the size of the files and the context your program runs in.

  1. If you actually know the max number of strings and the max width, this will be a lot faster (but you may waste a lot of memory if you use less than the "max"). The happy medium is to do what a lot of dynamic array implementations in C++ do: whenever you have to realloc myArray, allocate twice as much space as you need, and only realloc again once you've run out of space. Since the capacity doubles each time, growing to n elements costs only O(log n) reallocations.

  2. This may be a big performance hit. I strongly recommend using fseek, though the details will depend on your algorithm.

只等公子 2024-08-01 01:19:46


I often find the performance overhead to be outweighed by the direct control over memory that malloc and the other low-level C memory routines give you. Unless those regions of memory are going to remain static and untouched for far longer, amortized, than the time you spend touching them, it may be more beneficial to stick with the static array. In the end, it's up to you.
