Is amortization really desirable?

Posted 2024-08-22 13:32:45

For instance, suppose I have an algorithm that's O(n) and an algorithm that's amortized O(n). Is it fair to say that, in strictly big-O terms, the non-amortized algorithm will always be as fast or faster than the amortized algorithm? Or is there ever any reason to prefer the amortized version (ignoring things like code simplicity or ease of implementation)?

Comments (6)

紫瑟鸿黎 2024-08-29 13:32:45
  1. Big O notation only tells you how your code scales, not how fast it will be for finite N.
  2. The difference between amortized and non-amortized is important if you care about worst-case performance or latency (such as real-time or interactive systems). If you only care about average throughput, though, they're for all practical purposes the same. Amortized analysis is good enough even in some very performance-critical situations like scientific computing and large-scale data mining.
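
To make point 2 concrete, here is a minimal sketch (assuming C++ and std::vector's usual geometric growth, not something from the thread itself) that times every push_back. Almost every push is fast, but the rare reallocations show up as latency spikes, even though the amortized cost per push is O(1):

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 1'000'000; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        v.push_back(i);  // amortized O(1), but occasionally reallocates
        auto t1 = std::chrono::steady_clock::now();
        long long ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
        if (ns > 10'000)  // arbitrary spike threshold, for illustration only
            std::printf("push #%d took %lld ns (likely a reallocation)\n", i, ns);
    }
}
```
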
舟遥客 2024-08-29 13:32:45

Or is there ever any reason to prefer the amortized version

Smaller constants.

倾城泪 2024-08-29 13:32:45

The main difference between an O(n) algorithm and an amortized O(n) algorithm is that you don't know anything about the worst-case behavior of the amortized O(n) algorithm. In practice, that doesn't matter very often: if your algorithm is run many times, then you can count on the law of averages to balance out a few bad cases, and if your algorithm isn't run very many times, then it's unlikely that you'll ever hit the worst case at all.

The only cases where the word "amortized" should matter are cases where you can't accept occasional bad performance for some reason. For example, in GUI applications, you would happily give up some average-case performance in return for a guarantee that you'll never bog down and stop responding while the user is sitting there and getting bored. In such applications, you want to make sure that even the worst-case behavior is good for any code that could cause the GUI to stop responding.

Still, most of the time, you don't need to worry about amortized O(n) versus O(n), and you can worry instead about what the constant factors are (as others have already said).
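
As a hedged sketch of how you might act on that in practice (assuming C++ and that you can bound the input size up front), pre-allocating converts the amortized guarantee into a hard one inside the latency-sensitive region:

```cpp
#include <vector>

// Pay the allocation cost up front, outside the latency-sensitive path.
// After reserve(), each push_back below is genuinely O(1) -- no hidden
// reallocation can stall, say, a GUI event handler mid-loop.
void collect(std::vector<int>& out, int expected) {
    out.reserve(expected);            // one possibly-slow allocation, done early
    for (int i = 0; i < expected; ++i)
        out.push_back(i);             // guaranteed not to reallocate
}
```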

情归归情 2024-08-29 13:32:45

Big O notation tells you about how your algorithm changes with growing input.
It is not a shortcut for profiling your code.

It is possible that an O(n^2) algorithm is the better choice for all n that occur in your program, because the O(n) alternative hides a larger constant.

So your choice of algorithms really depends on which algorithm is faster for your input size. I guess the answer to your question is to profile both algorithms in your program and then decide which is faster.
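
A minimal sketch of that advice in C++ (algorithm_a and algorithm_b are hypothetical stand-ins; substitute your two real candidates and your program's actual input size):

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical placeholders -- swap in the two algorithms you are comparing.
void algorithm_a(int n) { volatile long s = 0; for (int i = 0; i < n; ++i) s += i; }
void algorithm_b(int n) { volatile long s = 0; for (int i = 0; i < n; ++i) s += i * i; }

// Time a callable in microseconds of wall-clock time.
template <typename F>
long long time_us(F&& f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}

int main() {
    const int n = 1'000'000;  // use your program's real input size
    std::printf("A: %lld us\n", time_us([&] { algorithm_a(n); }));
    std::printf("B: %lld us\n", time_us([&] { algorithm_b(n); }));
}
```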

我纯我任性 2024-08-29 13:32:45

A classic example of the need for an amortized algorithm is std::vector, for which push_back (insertion at the end) is amortized O(1); a sketch follows after the list below.

Some reasons to use amortized algorithms:

  • Much more efficient average case.
  • Easier implementation.
  • No algorithm with a worst-case guarantee exists.
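
A minimal sketch of why the std::vector case works (assuming a growth factor of 2; the standard only requires geometric growth): the copies performed across all reallocations cost at most n + n/2 + n/4 + ... < 2n element moves over n pushes, i.e. O(1) amortized per push. Printing the capacity makes the geometric growth visible:

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t cap = v.capacity();
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != cap) {  // a reallocation just happened
            cap = v.capacity();
            std::printf("size=%zu capacity=%zu\n", v.size(), cap);
        }
    }
}
```
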
白龙吟 2024-08-29 13:32:45

Strictly speaking, big-O isn't a precise enough measure to let you say that an algorithm with O(n) is faster than one with an amortized O(n). Think about it: if you drop down a level of fidelity in your complexity analysis, the constant factors may be significantly different and make the amortized version faster.

Also, the impact of amortization tends to depend on use. For example, if you're using a hash table, the impact of the amortization will be largely dependent on your ratio of get to add operations. So if you add 1000 entries, then do a billion lookups, the fact that you had to rehash a couple times doesn't make much of a difference. But if you're constantly adding entries the cost of rehashing might add up.

That's why amortization is different from the worst case. Big-O reflects the worst case, while amortization lets you say "one in every x operations will take a hit, and x is sufficiently large that it doesn't matter." Also, if you consider examples like insertion into a hash table, x grows according to some constant. So if you have a hash table that starts with 100 buckets and doubles every rehash, then the rehashes are asymptotically going to get farther and farther apart. In addition, the absolute worst-case complexity of an algorithm with an amortized complexity depends on the previous calls, while in measures like the average case the calls are considered independent.
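
A small sketch of the hash-table behavior described above, using std::unordered_map as the table (the exact rehash points depend on the implementation's bucket growth and max_load_factor): the prints show rehashes getting farther and farther apart as inserts accumulate, and a read-heavy workload afterwards never pays the cost again:

```cpp
#include <cstdio>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> table;
    std::size_t buckets = table.bucket_count();
    for (int i = 0; i < 100'000; ++i) {
        table[i] = i;                            // occasionally triggers a rehash
        if (table.bucket_count() != buckets) {   // a rehash just happened
            buckets = table.bucket_count();
            std::printf("after %6d inserts: %zu buckets\n", i + 1, buckets);
        }
    }
    // From here on, a billion lookups never rehash: the amortized cost was
    // paid entirely during the adds, matching the get/add ratio point above.
}
```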
