How do I delete duplicate odd numbers from a linked list?

Posted on 2024-10-16 20:15:12

Requirements/constraints:

  1. delete only duplicates
  2. keep one copy
  3. list is not initially sorted

How can this be implemented in C?
(An algorithm and/or code would be greatly appreciated!)

Comments (6)

久伴你 2024-10-23 20:15:13

If the list is very long, you want reasonable performance, and you are OK with allocating an extra O(log n) of memory, you can sort it in O(n log n) using qsort or merge sort:

http://swiss-knife.blogspot.com/2010/11/sorting.html

Then you can remove duplicates in O(n) (the total is O(n log n) + O(n)).

If your list is very small, you can do as jswolf19 suggests, and you will get n(n-1)/2 comparisons in the worst case.
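
A minimal sketch of the dedupe pass that follows the sort, assuming a simple singly linked node struct (the struct name and fields are placeholders, not from the question):

#include <stdlib.h>

/* Hypothetical node type; adjust to your own list definition. */
struct node {
    int value;
    struct node *next;
};

/* Once the list is sorted, duplicates sit next to each other:
   a single linear pass unlinks and frees them. */
void remove_adjacent_duplicates(struct node *head)
{
    struct node *cur = head;
    while (cur != NULL && cur->next != NULL) {
        if (cur->value == cur->next->value) {
            struct node *dup = cur->next;
            cur->next = dup->next;   /* unlink the duplicate */
            free(dup);
        } else {
            cur = cur->next;         /* values differ, advance */
        }
    }
}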

惯饮孤独 2024-10-23 20:15:13

There are several different ways of detecting/deleting duplicates:

Nested loops

Take the next value in sequence, then scan until the end of the list to see if this value occurs again. This is O(n²) -- although I believe the bounds can be argued lower? -- but the actual performance may be better as only scanning from i to end (not 0 to end) is done and it may terminate early. This does not require extra data aside from a few variables.

(See Christoph's answer as how this could be done just using a traversal of the linked list and destructive "appending" to a new list -- e.g. the nested loops don't have to "feel" like nested loops.)
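
A sketch of the nested-loops variant on a singly linked list, under an assumed minimal node layout (names are illustrative): the outer pointer keeps the first occurrence, and the inner scan unlinks any later node carrying the same value.

#include <stdlib.h>

struct node { int value; struct node *next; };

/* O(n^2) time, O(1) extra space; only later copies are removed,
   so the first occurrence of each value survives. */
void remove_duplicates_nested(struct node *head)
{
    for (struct node *i = head; i != NULL; i = i->next) {
        struct node *prev = i;
        struct node *j = i->next;
        while (j != NULL) {
            if (j->value == i->value) {
                prev->next = j->next;   /* unlink j */
                free(j);
                j = prev->next;
            } else {
                prev = j;
                j = j->next;
            }
        }
    }
}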

Sort and filter

Sort the list (mergesort can be modified to work on linked lists) and then detect duplicate values (they will be side-by-side now). With a good sort this is O(n*lg(n)). The sorting phase usually is/can be destructive (e.g. you have "one copy") but it has been modified ;-)
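
One common way to adapt mergesort to linked lists, sketched under the same kind of assumed node struct: the list is split with the slow/fast pointer trick and merged by rewiring next pointers, so no extra arrays are needed. A linear dedupe pass (like the one sketched in the first answer) then runs over the sorted result.

#include <stddef.h>

struct node { int value; struct node *next; };

/* Detach and return the second half of the list. */
static struct node *split(struct node *head)
{
    struct node *slow = head, *fast = head->next;
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;
        fast = fast->next->next;
    }
    struct node *second = slow->next;
    slow->next = NULL;
    return second;
}

/* Splice two sorted lists into one. */
static struct node *merge(struct node *a, struct node *b)
{
    struct node dummy, *tail = &dummy;
    while (a != NULL && b != NULL) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != NULL) ? a : b;
    return dummy.next;
}

struct node *mergesort_list(struct node *head)
{
    if (head == NULL || head->next == NULL)
        return head;
    struct node *second = split(head);
    return merge(mergesort_list(head), mergesort_list(second));
}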

Scan and maintain a look-up

Scan the list and as the list is scanned add the values to a lookup. If the lookup already contains said value then there is a duplicate! This approach can be O(n) if the lookup access is O(1). Generally a "hash/dictionary" or "set" is used as the lookup, but if only a limited range of integers is used then an array will work just fine (e.g. the index is the value). This requires extra storage but no "extra copy" -- at least in the literal reading.
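
For the array-as-lookup case, a sketch that assumes every value is known to fall in a small fixed range (the range bound here is purely an assumption for illustration):

#include <stdlib.h>
#include <stdbool.h>

struct node { int value; struct node *next; };

#define VALUE_RANGE 1000   /* assumed: all values lie in [0, 1000) */

/* Single pass; the 'seen' array is the O(1) lookup. */
void remove_duplicates_direct(struct node *head)
{
    bool seen[VALUE_RANGE] = { false };
    struct node *prev = NULL;

    for (struct node *cur = head; cur != NULL; ) {
        if (seen[cur->value]) {
            struct node *dup = cur;
            prev->next = cur->next;   /* safe: the head is always a
                                         first occurrence, so prev
                                         is non-NULL here */
            cur = cur->next;
            free(dup);
        } else {
            seen[cur->value] = true;
            prev = cur;
            cur = cur->next;
        }
    }
}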

For small values of n, big-O is pretty much worthless ;-)

Happy coding.

爱你是孤单的心事 2024-10-23 20:15:13

I'd either

  • mergesort the list followed by a linear scan to remove duplicates
  • use an insertion-sort based algorithm which already removes duplicates when re-building the list

The former will be faster; the latter is easier to implement from scratch: Just construct a new list by popping off elements from your old list and inserting them into the new one by scanning it until you hit an element of greater value (in which case you insert the element into the list) or equal value (in which case you discard the element).
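
A sketch of that second option, assuming a minimal node struct: nodes are popped off the old list one by one and spliced into a new sorted list, with equal values discarded on the spot.

#include <stdlib.h>

struct node { int value; struct node *next; };

/* Destructively rebuilds the list in sorted order, dropping
   duplicate values along the way. Returns the new head. */
struct node *insertion_sort_unique(struct node *old)
{
    struct node *sorted = NULL;

    while (old != NULL) {
        struct node *n = old;        /* pop the front node */
        old = old->next;

        struct node **p = &sorted;   /* scan for the insertion point */
        while (*p != NULL && (*p)->value < n->value)
            p = &(*p)->next;

        if (*p != NULL && (*p)->value == n->value) {
            free(n);                 /* equal value: discard */
        } else {
            n->next = *p;            /* greater value or end: insert */
            *p = n;
        }
    }
    return sorted;
}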

Well, you can sort the list first and then check for duplicates, or you could do something like the following:

for i from 0 to list.length-1
    for j from i+1 to list.length-1
        if list[i] == list[j]
            //delete one of them
        fi
    loop
loop
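
The pseudocode indexes the list like an array, so a literal C rendering over an int array might look like this, with "delete" done by shifting the tail left one slot (the function and parameter names are made up for illustration):

#include <stddef.h>
#include <string.h>

/* Returns the new length after deleting later duplicates in place. */
size_t delete_duplicates(int list[], size_t length)
{
    for (size_t i = 0; i + 1 < length; i++) {
        for (size_t j = i + 1; j < length; ) {
            if (list[i] == list[j]) {
                /* delete list[j] by shifting the tail left */
                memmove(&list[j], &list[j + 1],
                        (length - j - 1) * sizeof list[0]);
                length--;            /* j now holds the next element */
            } else {
                j++;
            }
        }
    }
    return length;
}
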
司马昭之心 2024-10-23 20:15:13

This is probably the most unoptimized piece of crap, but it'll probably work.

Iterate through the list, holding a pointer to the previous object every time you go on to the next one. Inside your iteration loop, iterate through it all to check for a duplicate. If there is a duplicate, now back in the main iteration loop, get the next object. Set the previous object's next pointer to the object you just retrieved, then break out of the loop and restart the whole process until there are no duplicates.
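
A direct sketch of that restart-after-every-deletion process, again assuming a minimal node struct; as the author says, the repeated restarts make it slower than the plain nested-loop versions above.

#include <stdlib.h>
#include <stdbool.h>

struct node { int value; struct node *next; };

/* Find one duplicate; if found, unlink and free it and report
   true so the caller can restart from the head. */
static bool remove_one_duplicate(struct node *head)
{
    for (struct node *i = head; i != NULL; i = i->next) {
        for (struct node *prev = i; prev->next != NULL; prev = prev->next) {
            if (prev->next->value == i->value) {
                struct node *dup = prev->next;
                prev->next = dup->next;
                free(dup);
                return true;
            }
        }
    }
    return false;
}

void remove_duplicates_restart(struct node *head)
{
    while (remove_one_duplicate(head))
        ;   /* restart the whole scan until nothing is removed */
}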

偏爱自由 2024-10-23 20:15:13

You can do this in linear time using a hash table.

You'd want to scan through the list sequentially. Each time you encounter an odd-numbered element, look it up in your hash table. If that number is already in the hash table, delete it from the list; if not, add it to the hash table and continue.

Basically the idea is that for each element you scan in the list, you are able to check in constant time whether it is a duplicate of a previous element that you've seen. This takes only a single pass through your list and will take at worst a linear amount of memory (worst case is that every element of the list is a unique odd number, thus your hash table is as long as your list).
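
C has no hash table in its standard library, so a sketch of this approach has to roll its own; the one below uses a small fixed-size chained hash set and applies the question's actual rule, deduplicating only odd values (the node struct, names, and bucket count are all illustrative assumptions):

#include <stdlib.h>
#include <stdbool.h>

struct node { int value; struct node *next; };

#define NBUCKETS 1024

struct entry { int key; struct entry *next; };

/* Returns true if key was already present; otherwise records it.
   Error handling for malloc is omitted for brevity. */
static bool seen_before(struct entry *buckets[], int key)
{
    size_t b = (unsigned)key % NBUCKETS;
    for (struct entry *e = buckets[b]; e != NULL; e = e->next)
        if (e->key == key)
            return true;
    struct entry *e = malloc(sizeof *e);
    e->key = key;
    e->next = buckets[b];
    buckets[b] = e;
    return false;
}

/* One pass: a repeated odd value is unlinked and freed; even
   values and first occurrences of odd values are kept. */
void remove_duplicate_odds(struct node **headp)
{
    struct entry *buckets[NBUCKETS] = { NULL };

    for (struct node **p = headp; *p != NULL; ) {
        struct node *cur = *p;
        if (cur->value % 2 != 0 && seen_before(buckets, cur->value)) {
            *p = cur->next;   /* unlink the duplicate odd value */
            free(cur);
        } else {
            p = &cur->next;
        }
    }
    /* real code should also free the hash-set entries here */
}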
