Program gets killed during testing

Posted 2024-10-29 08:57:06

When I run my program I get the message Killed along with some information about the script. After doing some research on the problem, I found out that I wasn't deleting my dynamically allocated variables (stupid me!). I feel I have now taken care of that problem, but I am still getting the Killed message in the terminal when I run the program on Linux.

    // Varies the load factor by growing the table size.
    for (int tableSize = fileLength; tableSize < fileLength * 2; tableSize = tableSize + 500)
    {
        // Creates fresh hash tables for each of the trials.
        for (int fileNum = 0; fileNum < NUMTIMES; fileNum++)
        {
            Array_HashTable *linear_div_hash = new Array_HashTable(tableSize);
            LinkedList_HashTable *chain_div_hash = new LinkedList_HashTable(tableSize);
            Array_HashTable *doubleHash = new Array_HashTable(tableSize);
            LinkedList_HashTable *mult_hash = new LinkedList_HashTable(tableSize);

            // Hashes each entry of the current file into every table.
            for (int index = 0; index < fileLength; index++)
            {
                linear_div_hash->Linear_ProbeDH(read[fileNum][index]);
                chain_div_hash->Division_Hash(read[fileNum][index]);
                doubleHash->Double_Hash(read[fileNum][index]);
                mult_hash->Mulitplication_Hash(read[fileNum][index]);
            } // ends the index for loop

            optimalOutput("VariableSizeLinearCollisionData", fileLength, tableSize, linear_div_hash->getCollisions(), fileAppendage);
            optimalOutput("VariableSizeDoubleCollisionData", fileLength, tableSize, doubleHash->getCollisions(), fileAppendage);
            optimalOutput("VariableSizeDivisionChainingCollisionData", fileLength, tableSize, chain_div_hash->getCollisions(), fileAppendage);
            optimalOutput("VariableSizeMultiplicationChainingCollisionData", fileLength, tableSize, mult_hash->getCollisions(), fileAppendage);

            // Destructor-like cleanup, then the objects themselves are deleted.
            linear_div_hash->EndArray_HashTable();
            chain_div_hash->EndLinkedList_HashTable();
            doubleHash->EndArray_HashTable();
            mult_hash->EndLinkedList_HashTable();

            delete linear_div_hash;
            delete chain_div_hash;
            delete doubleHash;
            delete mult_hash;
        } // ends the fileNum for loop
    } // ends the parent for loop over the table size

Basically the code works like this: the first for loop controls the size of the hash table. The second loop controls which file's data gets hashed, and instantiates a hash table object for each trial. The innermost loop calls the hash functions. The stats are then written to a file by the output function. Next, I call a destructor-like function to delete the dynamic variables inside each class; I can't do this in an actual destructor because that was giving me errors. Finally, I delete the objects.
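
To make the cleanup step concrete, this is roughly what one of those destructor-like functions looks like (a sketch with an assumed member name, not my exact code):

    // Sketch of the cleanup routine; 'table' is an assumed member name.
    void Array_HashTable::EndArray_HashTable()
    {
        delete[] table;    // releases the dynamically allocated slot array
        table = nullptr;   // guards against a later double delete
    }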

What can I do?

Comments (2)

栩栩如生 2024-11-05 08:57:06

If you are running on Linux, you could use valgrind:

valgrind myprogram
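
If the default report is too terse, Memcheck's standard leak-check flags (plain valgrind options, nothing specific to this program) give a fuller breakdown:

    valgrind --leak-check=full --show-leak-kinds=all ./myprogram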

Either way it will run slowly, but it will report many memory problems. If you still can't find the leak, you can heap-profile the program using Massif:

valgrind --tool=massif myprogram
ms_print <profile_output_file>

This will generate a graph of memory usage over time, along with the largest memory allocations at several snapshot moments (including precise stack traces of where they were allocated).

Oh, and build with gcc -g so you get debug info.
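
For example, assuming a single-file build (adjust the file names to your project):

    g++ -g -O0 -o myprogram myprogram.cpp

-O0 keeps the optimizer from folding variables away, which makes the valgrind stack traces easier to read.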

森末i 2024-11-05 08:57:06

In the exhibited code you are calling new and then delete four times on objects of two types. That looks fine, provided the destructors of Array_HashTable and LinkedList_HashTable correctly free any memory their objects allocated.

If you are still leaking memory from this code, those classes would be my first suspects.
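
For instance, a minimal sketch of a destructor that frees the object's own allocation (the int* member is assumed; your class will differ). If a real destructor "was giving errors", the usual culprit is a shallow copy double-deleting the buffer, which the rule of three addresses:

    class Array_HashTable {
    public:
        explicit Array_HashTable(int size) : table(new int[size]()) {}
        ~Array_HashTable() { delete[] table; }  // frees the slot array automatically
        // Rule of three: forbid shallow copies so the buffer can't be deleted twice.
        Array_HashTable(const Array_HashTable &) = delete;
        Array_HashTable &operator=(const Array_HashTable &) = delete;
        // ... hashing members elided ...
    private:
        int *table;  // assumed member; the real class may store something else
    };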
