Algorithmic complexity of different programming paradigms

Posted on 2024-10-13 19:08:36

I know that most programming languages are Turing complete, but I wonder whether any problem can be solved by an algorithm of the same complexity in any programming language (and in particular in any programming paradigm).

To make my question more explicit with an example: is there any problem which can be solved by an imperative algorithm of complexity x (say O(n)), but cannot be solved by a functional algorithm with the same complexity (or vice versa)?

Edit: The algorithm itself can be different. The question is about the complexity of solving the problem -- using any approach available in the language.

Comments (6)

只等公子 2024-10-20 19:08:36

In general, no, not all algorithms can be implemented with the same order of complexity in all languages. This can be trivially proven, for instance, with a hypothetical language that disallows O(1) access to an array. However, there aren't any algorithms (to my knowledge) that cannot be implemented with the optimal order of complexity in a functional language. The complexity analysis of an algorithm's pseudocode makes certain assumptions about what operations are legal, and what operations are O(1). If you break one of those assumptions, you can alter the complexity of the algorithm's implementation even though the language is Turing complete. Turing-completeness makes no guarantees regarding the complexity of any operation.
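To make that concrete, here is a minimal Haskell sketch (my own illustration, not part of the answer): the same index-by-index scan stays O(n) over a structure with O(1) element access (Data.Array), but becomes O(n²) when the only indexing available is the O(n) list operator (!!).

```haskell
import Data.Array (Array, bounds, listArray, (!))

-- O(n) overall: each (!) lookup on an immutable Data.Array is O(1).
sumByIndexArray :: Array Int Int -> Int
sumByIndexArray a = sum [ a ! i | i <- [lo .. hi] ]
  where (lo, hi) = bounds a

-- O(n^2) overall: (!!) walks i cells of the list to reach index i,
-- so the "same" algorithm degrades once O(1) access is taken away.
sumByIndexList :: [Int] -> Int
sumByIndexList xs = sum [ xs !! i | i <- [0 .. length xs - 1] ]

main :: IO ()
main = do
  let n   = 10000 :: Int
      xs  = [1 .. n]
      arr = listArray (0, n - 1) xs
  print (sumByIndexArray arr)  -- fast
  print (sumByIndexList xs)    -- noticeably slower as n grows
```

Of course nobody would sum a Haskell list by index; the point is only that the cost model of the primitive operations, not Turing completeness, is what bounds the achievable complexity.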

揽清风入怀 2024-10-20 19:08:36

An algorithm has a measured runtime, such as the O(n) you mentioned, and implementations of an algorithm must adhere to that same runtime or they do not implement the algorithm. The language or implementation does not, by definition, change the algorithm and thus does not change the asymptotic runtime.

That said, certain languages and technologies might make expressing the algorithm easier and offer constant speedups (or slowdowns) due to how the language gets compiled or executed.

鱼忆七猫命九 2024-10-20 19:08:36

I think your first paragraph is wrong. And I think your edit doesn't change that.

Assuming you are requiring that the observed behaviour of an implementation conforms to the time complexity of the algorithm, then...

When calculating the complexity of an algorithm, assumptions are made about which operations are constant time. These assumptions are where you're going to find your clues.

Some of the more common assumptions are things like constant time array access, function calls, and arithmetic operations.

If you cannot provide those operations in constant time in a language, you cannot reproduce the algorithm in a way that preserves the time complexity.

Reasonable languages can break those assumptions, and sometimes have to if they want to deal with, say, immutable data structures with shared state, concurrency, etc.

For example, Clojure uses trees to represent Vectors. This means that access is not constant time (I think it's log32 of the size of the array, but that's not constant even though it might as well be).
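Roughly the same trade-off can be sketched in Haskell (my own example, with Data.Sequence standing in for Clojure's tree-backed vectors): a persistent sequence gives reads and single-element updates in about O(log n) with structural sharing, while an immutable flat array reads in O(1) but has to copy everything on each update.

```haskell
import qualified Data.Sequence as Seq
import Data.Array (Array, elems, listArray, (!), (//))

-- Persistent sequence (a finger tree): reading or replacing one element
-- costs roughly O(log n) rather than O(1), the price of immutability
-- with structural sharing, much like Clojure's vectors.
bumpSeq :: Int -> Seq.Seq Int -> Seq.Seq Int
bumpSeq i s = Seq.update i (Seq.index s i + 1) s

-- Immutable flat array: (!) is O(1), but (//) copies the whole array,
-- so a single "update" is O(n).
bumpArr :: Int -> Array Int Int -> Array Int Int
bumpArr i a = a // [(i, a ! i + 1)]

main :: IO ()
main = do
  let s = Seq.fromList [1 .. 10] :: Seq.Seq Int
      a = listArray (0, 9) [1 .. 10] :: Array Int Int
  print (bumpSeq 3 s)          -- fromList [1,2,3,5,5,6,7,8,9,10]
  print (elems (bumpArr 3 a))  -- [1,2,3,5,5,6,7,8,9,10]
```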

You can easily imagine a language having to do complicated stuff at runtime when calling a function. For example, deciding which one was meant.

Once upon a time, floating-point and multi-word integer multiplication and division were sadly not constant time (they were implemented in software). There was a period, as languages transitioned to using hardware, when very reasonable language implementations behaved very differently.

I'm also pretty sure you can come up with algorithms that fare very poorly in the world of immutable data structures. I've seen some optimisation algorithms that would be horribly difficult, maybe impossible or effectively so, to implement while dealing with immutability without breaking the time complexity.

For what it's worth, there are algorithms out there that assume set union and intersection are constant time... good luck implementing those algorithms in constant time. There are also algorithms that use an 'oracle' that can answer questions in constant time... good luck with those too.

木森分化 2024-10-20 19:08:36

I think that a language can have a different set of basic operations that cost O(1), for example mathematical operations (+, -, *, /), variable/array access (a[i]), function calls, and anything else you can think of.

If a language does not have one of these O(1) operations (like Brainfuck, which does not have O(1) array access), it cannot do everything C can do with the same complexity; but if a language has more O(1) operations (for example, a language with O(1) array search), it can do more than C.

I think that all "serious" language have the same basilar O(1) operations, so they can resolve problem with same complexity.

笑脸一如从前 2024-10-20 19:08:36

If you consider Brainfuck or the Turing machine itself, there is one fundamental operation that takes O(n) time there, although in most other languages it can be done in O(1): indexing an array.

I'm not completely sure about this, but I think you can't have a true array in functional programming either (one with O(1) "get element at position" and O(1) "set element at position"). Because of immutability, you can either have a structure that is quick to change but slow to access, or you have to copy large parts of the structure on every change to get fast access. But I guess you could cheat around that using monads.
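The "cheat around that using monads" part is real, at least in Haskell: the ST monad lets a function that looks pure from the outside use a genuinely mutable array internally, with O(1) reads and writes. A minimal sketch of my own (using GHC's array package), not something from the original answer:

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, getElems, newArray, readArray, writeArray)

-- Pure from the outside, mutable inside: count how often each value in
-- the range [0, k) occurs.  Every readArray/writeArray is O(1), so the
-- whole pass is O(n + k) despite the purely functional interface.
counts :: Int -> [Int] -> [Int]
counts k xs = runST $ do
  arr <- newArray (0, k - 1) 0 :: ST s (STUArray s Int Int)
  forM_ xs $ \x -> do
    c <- readArray arr x
    writeArray arr x (c + 1)
  getElems arr

main :: IO ()
main = print (counts 3 [0, 1, 1, 2, 2, 2])  -- [1,2,3]
```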

盗琴音 2024-10-20 19:08:36

Looking at things like functional versus imperative, I doubt you'll find any real differences.

Looking at individual languages and implementations is a different story though. Ignoring, for the moment, the examples from Brainfuck and such, there are still some pretty decent examples to find.

I still remember one example from many years ago, writing APL (on a mainframe). The task was to find (and eliminate) duplicates in a sorted array of numbers. At the time, most of the programming I'd done was in Fortran, with a few bits and pieces in Pascal (still the latest and greatest thing at the time) or BASIC. I did what seemed obvious: wrote a loop that stepped through the array, comparing array[i] to array[i+1], and keeping track of a couple of indexes, copying each unique element back the appropriate number of places, depending on how many elements had already been eliminated.

While this would have worked quite well in the languages to which I was accustomed, it was barely short of a disaster in APL. The solution that worked a lot better was based more on what was easy in APL than on computational complexity. Specifically, what you did was compare the first element of the array with the first element of the array after it had been "rotated" by one element. Then, you either kept the array as it was, or eliminated the last element. Repeat that until you'd gone through the whole array (as I recall, detected when the first element was smaller than the first element in the rotated array).

The difference was fairly simple: like most APL implementations (at least at the time), this one was a pure interpreter. A single operation (even one that was pretty complex) was generally pretty fast, but interpreting the input file took quite a bit of time. The improved version was much shorter and faster to interpret (e.g., APL provides the "rotate the array" thing as a single, primitive operation so that was only a character or two to interpret instead of a loop).
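For flavour, the rotate-and-compare idea looks roughly like this in Haskell (my own sketch, not APL): compare the sorted sequence with itself shifted by one position and keep every element that differs from its successor. Both this and the explicit index loop are O(n); as the answer says, the win in APL was in constant factors, one vectorised primitive instead of an interpreted loop, not in asymptotic complexity.

```haskell
-- Deduplicate a sorted list by comparing it with itself shifted by one
-- position: keep every element that differs from its successor, then
-- append the last element.  Still O(n), like the hand-written index loop.
dedupSorted :: Eq a => [a] -> [a]
dedupSorted [] = []
dedupSorted xs = [ x | (x, y) <- zip xs (tail xs), x /= y ] ++ [last xs]

main :: IO ()
main = print (dedupSorted [1, 1, 2, 3, 3, 3, 4])  -- [1,2,3,4]
```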
