What is the fastest algorithm for sorting a linked list?
I'm curious if O(n log n) is the best a linked list can do.
13 Answers
It is reasonable to expect that you cannot do any better than O(N log N) in running time.
However, the interesting part is to investigate whether you can sort it in place, whether the sort is stable, what its worst-case behavior is, and so on.
Simon Tatham, of PuTTY fame, explains how to sort a linked list with merge sort. He concludes with the following comments:
There is also an example implementation in C that works for both singly and doubly linked lists.
As @Jørgen Fogh mentions below, big-O notation may hide some constant factors that can cause one algorithm to perform better because of memory locality, because of a low number of items, etc.
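As a rough illustration of the approach (this is not Simon Tatham's implementation; the Node type and function names are assumptions), a top-down merge sort over a singly linked list might look like this:

```c
#include <stddef.h>

/* Hypothetical node type, assumed for illustration only. */
typedef struct Node { int value; struct Node *next; } Node;

/* Stable merge of two sorted lists: on ties, elements from `a` come first. */
static Node *merge(Node *a, Node *b) {
    Node head = {0}, *tail = &head;
    while (a && b) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return head.next;
}

/* O(n log n) merge sort: split with slow/fast pointers, sort halves, merge. */
Node *merge_sort(Node *list) {
    if (!list || !list->next) return list;
    Node *slow = list, *fast = list->next;
    while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
    Node *second = slow->next;
    slow->next = NULL;            /* cut the list into two halves */
    return merge(merge_sort(list), merge_sort(second));
}
```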
Depending on a number of factors, it may actually be faster to copy the list to an array and then use a Quicksort.
The reason this might be faster is that an array has much better cache performance than a linked list. If the nodes in the list are dispersed in memory, you may be generating cache misses all over the place. Then again, if the array is large you will get cache misses anyway.
Mergesort parallelises better, so it may be a better choice if that is what you want. It is also much faster if you perform it directly on the linked list.
Since both algorithms run in O(n * log n), making an informed decision would involve profiling them both on the machine you would like to run them on.
Update
I decided to test my hypothesis and wrote a C program which measured the time (using clock()) taken to sort a linked list of ints. I tried with a linked list where each node was allocated with malloc() and a linked list where the nodes were laid out linearly in an array, so the cache performance would be better. I compared these with the built-in qsort, which included copying everything from a fragmented list to an array and copying the result back again. Each algorithm was run on the same 10 data sets and the results were averaged. These are the results:
Conclusion
At least on my machine, copying into an array is well worth it to improve the cache performance, since you rarely have a completely packed linked list in real life. It should be noted that my machine has a 2.8GHz Phenom II, but only 0.6GHz RAM, so the cache is very important.
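For reference, a minimal sketch of the copy-to-array idea, assuming a simple Node type holding an int (this is not the benchmark program described above):

```c
#include <stdlib.h>

typedef struct Node { int value; struct Node *next; } Node;  /* assumed node type */

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Copy the n values into a contiguous buffer, qsort it, then write the values back. */
void sort_list_via_array(Node *head, size_t n) {
    int *buf = malloc(n * sizeof *buf);
    if (!buf) return;                      /* allocation failed: leave the list as-is */
    size_t i = 0;
    for (Node *p = head; p; p = p->next) buf[i++] = p->value;
    qsort(buf, n, sizeof *buf, cmp_int);
    i = 0;
    for (Node *p = head; p; p = p->next) p->value = buf[i++];
    free(buf);
}
```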
This is a nice little paper on this topic. His empirical conclusion is that Treesort is best, followed by Quicksort and Mergesort. Sediment sort, bubble sort, and selection sort perform very badly.
A COMPARATIVE STUDY OF LINKED LIST SORTING ALGORITHMS
by Ching-Kuang Shene
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.9981
Comparison sorts (i.e. ones based on comparing elements) cannot possibly be faster than n log n. It doesn't matter what the underlying data structure is. See Wikipedia.

Other kinds of sort that take advantage of there being lots of identical elements in the list (such as the counting sort), or some expected distribution of elements in the list, are faster, though I can't think of any that work particularly well on a linked list.
As stated many times, the lower bound on comparison-based sorting for general data is going to be O(n log n). To briefly summarize the argument: there are n! different ways a list can be ordered, so any comparison tree that distinguishes all n! possible final orderings needs height at least log(n!); since n! is on the order of n^n (and at least (n/2)^(n/2)), this gives an Ω(n log n) lower bound, i.e. O(n log n) is the best you can do.
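As a worked version of that bound, keeping only the largest n/2 factors of n! already gives:

$$\log_2(n!) \;=\; \sum_{i=1}^{n} \log_2 i \;\ge\; \frac{n}{2}\,\log_2\frac{n}{2} \;=\; \Omega(n \log n).$$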
So, for general data on a linked list, the best possible sort that will work on any data that can compare two objects is going to be O(n log n). However, if you have a more limited domain of things to work in, you can improve the time it takes (at least proportionally to n). For instance, if you are working with integers no larger than some value, you could use Counting Sort or Radix Sort, since these exploit the specific objects you're sorting to reduce the complexity. Be careful, though: these add some other terms to the complexity that you may not consider (for instance, Counting Sort and Radix Sort both add factors based on the size of the numbers you're sorting, e.g. O(n + k) where k is the size of the largest number for Counting Sort).
Also, if you happen to have objects that have a perfect hash (or at least a hash that maps all values differently), you could try using a counting or radix sort on their hash functions.
不是对您问题的直接答案,但如果您使用跳过列表,它已经排序并且具有 O(log N) 搜索时间。
Not a direct answer to your question, but if you use a Skip List, it is already sorted and has O(log N) search time.
基数排序特别适合链表,因为很容易制作一个头表对应于数字的每个可能值的指针。
A Radix sort is particularly suited to a linked list, since it's easy to make a table of head pointers corresponding to each possible value of a digit.
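A rough sketch of that idea, assuming 32-bit keys and 8-bit digits (the Node type and helper names are hypothetical): each least-significant-digit pass distributes nodes into a table of head/tail pointers and then relinks the buckets, which keeps the pass stable.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct Node { uint32_t key; struct Node *next; } Node;  /* assumed node type */

/* One stable LSD pass: bucket nodes by the byte at `shift`, then concatenate buckets. */
static Node *radix_pass(Node *list, unsigned shift) {
    Node *head[256] = {0}, *tail[256] = {0};
    while (list) {
        Node *next = list->next;
        unsigned d = (list->key >> shift) & 0xFF;
        list->next = NULL;
        if (tail[d]) tail[d]->next = list; else head[d] = list;
        tail[d] = list;
        list = next;
    }
    Node *out = NULL, *out_tail = NULL;
    for (int d = 0; d < 256; d++) {
        if (!head[d]) continue;
        if (out_tail) out_tail->next = head[d]; else out = head[d];
        out_tail = tail[d];
    }
    return out;
}

/* Four passes, least significant byte first, sort 32-bit keys with O(n) list operations per pass. */
Node *radix_sort(Node *list) {
    for (unsigned shift = 0; shift < 32; shift += 8)
        list = radix_pass(list, shift);
    return list;
}
```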
Merge sort doesn't require O(1) access and is O(n ln n). No known algorithms for sorting general data are better than O(n ln n).
The special data algorithms such as radix sort (limits size of data) or histogram sort (counts discrete data) could sort a linked list with a lower growth function, as long as you use a different structure with O(1) access as temporary storage.
Another class of special data is a comparison sort of an almost-sorted list with k elements out of order. This can be sorted in O(kn) operations.
Copying the list to an array and back would be O(N), so any sorting algorithm can be used if space is not an issue.
For example, given a linked list containing uint_8, this code will sort it in O(N) time using a histogram sort:
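A minimal sketch of such a histogram sort (not the answer's original listing; the Node type holding a single 8-bit value is an assumption):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct Node { uint8_t value; struct Node *next; } Node;  /* assumed node type */

/* Histogram sort: count each of the 256 possible values, then rewrite the
   node values in ascending order. Two passes over the list, O(N) total. */
void histogram_sort(Node *head) {
    size_t count[256] = {0};
    for (Node *p = head; p; p = p->next) count[p->value]++;
    Node *p = head;
    for (int v = 0; v < 256; v++)
        for (size_t c = count[v]; c > 0; c--) { p->value = (uint8_t)v; p = p->next; }
}
```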
As far as I know, the best sorting algorithm is O(n*log n), whatever the container - it has been proved that sorting in the broad sense of the word (mergesort/quicksort etc. style) can't go lower. Using a linked list will not give you a better run time.
The only algorithms that run in O(n) are "hack" algorithms that rely on counting values rather than actually sorting.
Here's an implementation that traverses the list just once, collecting runs, then schedules the merges in the same way that mergesort does.
Complexity is O(n log m) where n is the number of items and m is the number of runs. Best case is O(n) (if the data is already sorted) and worst case is O(n log n) as expected.
It requires O(log m) temporary memory; the sort is done in-place on the lists.
(Updated below. A commenter makes a good point that I should describe the algorithm here.)
The gist of the algorithm is:
- traverse the list once, collecting each maximal ascending or descending run as a small sorted sublist;
- push each run onto a stack, merging runs of equal size as they meet, so that stack[i] holds either nothing or a run built from 2^i original runs;
- when the input is exhausted, merge everything left on the stack.
Accumulating runs doesn't require much explanation, but it's good to take the opportunity to accumulate both ascending runs and descending runs (reversed). Here it prepends items smaller than the head of the run and appends items greater than or equal to the end of the run. (Note that prepending should use strict less-than to preserve sort stability.)
It's easiest to just paste the merging code here:
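A sketch of how the run stack and merge scheduling can be wired up (not the answer's original code; the Node type, merge helper, and MAX_STACK bound are assumptions):

```c
#include <stddef.h>

typedef struct Node { int value; struct Node *next; } Node;  /* assumed node type */

/* Stable merge of two sorted lists: on ties, elements from `a` come first. */
static Node *merge(Node *a, Node *b) {
    Node head = {0}, *t = &head;
    while (a && b) {
        if (a->value <= b->value) { t->next = a; a = a->next; }
        else                      { t->next = b; b = b->next; }
        t = t->next;
    }
    t->next = a ? a : b;
    return head.next;
}

#define MAX_STACK 64   /* plenty: stack depth is bounded by 1 + log2(nruns) */

/* Push a freshly collected run; whenever a slot is occupied, merge and carry upward,
   so stack[i] holds either NULL or a run built from 2^i collected runs. */
static void push_run(Node *stack[MAX_STACK], Node *run) {
    int i = 0;
    while (stack[i]) {
        run = merge(stack[i], run);   /* stack[i] was collected earlier: keep it on the left */
        stack[i++] = NULL;
    }
    stack[i] = run;
}

/* Once the input is exhausted, merge whatever remains on the stack. */
static Node *collapse(Node *stack[MAX_STACK]) {
    Node *result = NULL;
    for (int i = 0; i < MAX_STACK; i++)
        if (stack[i]) result = result ? merge(stack[i], result) : stack[i];
    return result;
}
```

Accumulate runs, call push_run for each, and call collapse at the end; stability is preserved because earlier runs are always the left argument of merge.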
Consider sorting the list (d a g i b e c f j h) (ignoring runs). The stack states proceed as follows:
Then, finally, merge all these lists.
Note that the number of items (runs) at stack[i] is either zero or 2^i, and the stack size is bounded by 1+log2(nruns). Each element is merged once per stack level, hence O(n log m) comparisons. There's a passing similarity to Timsort here, though Timsort maintains its stack using something like a Fibonacci sequence, whereas this uses powers of two.
Accumulating runs takes advantage of any already sorted data so that best case complexity is O(n) for an already sorted list (one run). Since we're accumulating both ascending and descending runs, runs will always be at least length 2. (This reduces the maximum stack depth by at least one, paying for the cost of finding the runs in the first place.) Worst case complexity is O(n log n), as expected, for data that is highly randomized.
(Um... Second update.)
Or just see Wikipedia on bottom-up merge sort.
You can copy it into an array and then sort it.
Copying into an array O(n),
sorting O(n lg n) (if you use a fast algorithm like merge sort),
copying back to the linked list O(n) if necessary,
so it is going to be O(n lg n).
Note that if you do not know the number of elements in the linked list, you won't know the size of the array. If you are coding in Java you can use an ArrayList, for example.
The question is LeetCode #148, and there are plenty of solutions offered in all major languages. Mine is as follows, but I'm wondering about the time complexity. In order to find the middle element, we traverse the complete list each time. The first time n elements are iterated over, the second time 2 * n/2 elements are iterated over, and so on and so forth. It seems to be O(n^2) time.
time.合并排序是你在这里能做的最好的事情。
Mergesort is the best you can do here.