Program gets killed during testing
When I run my program I get the message Killed in the terminal, along with some information about the script. After doing some research on the problem, I found out that I wasn't deleting my dynamically allocated variables (stupid me!). I feel like I have taken care of that problem now, but I am still getting the Killed message in the terminal when I run on Linux.
// Does the manipulation of the load factor.
for (int tableSize = fileLength; tableSize < fileLength * 2; tableSize += 500)
{
    // Creates hash tables to be reused for each of the trials.
    for (int fileNum = 0; fileNum < NUMTIMES; fileNum++)
    {
        Array_HashTable *linear_div_hash = new Array_HashTable(tableSize);
        LinkedList_HashTable *chain_div_hash = new LinkedList_HashTable(tableSize);
        Array_HashTable *doubleHash = new Array_HashTable(tableSize);
        LinkedList_HashTable *mult_hash = new LinkedList_HashTable(tableSize);

        // Does the hashing for each of the files created.
        for (int index = 0; index < fileLength; index++)
        {
            linear_div_hash->Linear_ProbeDH(read[fileNum][index]);
            chain_div_hash->Division_Hash(read[fileNum][index]);
            doubleHash->Double_Hash(read[fileNum][index]);
            mult_hash->Mulitplication_Hash(read[fileNum][index]);
        } // ends the index for loop

        optimalOutput("VariableSizeLinearCollisionData", fileLength, tableSize, linear_div_hash->getCollisions(), fileAppendage);
        optimalOutput("VariableSizeDoubleCollisionData", fileLength, tableSize, doubleHash->getCollisions(), fileAppendage);
        optimalOutput("VariableSizeDivisionChainingCollisionData", fileLength, tableSize, chain_div_hash->getCollisions(), fileAppendage);
        optimalOutput("VariableSizeMultiplicationChainingCollisionData", fileLength, tableSize, mult_hash->getCollisions(), fileAppendage);

        linear_div_hash->EndArray_HashTable();
        chain_div_hash->EndLinkedList_HashTable();
        doubleHash->EndArray_HashTable();
        mult_hash->EndLinkedList_HashTable();

        delete linear_div_hash;
        delete chain_div_hash;
        delete doubleHash;
        delete mult_hash;
    } // ends the fileNum for loop
} // ends the parent for loop with tableSize as the variable
Basically the code works like this: the first for loop controls the size of the hash table. The second loop controls which file's data will be hashed, and the hash table objects are instantiated for that trial. The innermost loop calls the hash functions. The stats are then written to a file using the output function. After that I call a function that works like a destructor to delete the dynamic variables inside each class; I can't use an actual destructor because it was giving me errors. Finally I delete the objects themselves.
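Simplified, the destructor-like cleanup function does something like this (the member name here is illustrative, not my exact code):

void Array_HashTable::EndArray_HashTable()
{
    delete[] table;   // release the dynamically allocated bucket array
    table = nullptr;  // guard against accidental reuse
}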
What can I do?
If you are running on Linux, you can use valgrind. It runs the program slowly, but it reports many memory problems. If you still can't find the issue, you can heap-profile the program with massif. That will generate a graph of memory usage over time, plus the largest memory allocations at several snapshot moments (including precise stack traces of where they were allocated).

Oh, and build with gcc -g to get debug info.
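For example (hash_test is a placeholder name for the compiled binary; g++ is used here since the code is C++):

g++ -g main.cpp -o hash_test              # build with debug info so valgrind can show line numbers
valgrind --leak-check=full ./hash_test    # reports leaks, with stack traces of the allocations
valgrind --tool=massif ./hash_test        # heap profiling; writes massif.out.<pid>
ms_print massif.out.<pid>                 # prints the usage graph and snapshot details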
In the exhibited code you are calling new and then delete four times on objects of two types. That looks pretty good, provided the destructors of Array_HashTable and LinkedList_HashTable correctly free any memory allocated by their objects. If you are still leaking memory from this code, those classes would be my first suspects.