Why does the halting problem make it impossible for software to determine an algorithm's time complexity?

Posted on 2024-12-03 02:56:50

I've read some articles about big-O calculation and the halting problem. Obviously it's not possible, for ALL algorithms, to say whether they are ever going to stop; for example:

while (System.in.readline()) {   // pseudocode: loop as long as another line of input is available
}

However, what would be the big-O of such a program? I think it's not defined, for the same reason that it's not possible to say whether it will ever halt: you simply don't know.

So... there are some algorithms for which you cannot say whether they will ever halt. But if you can't say that, then the big-O of such an algorithm is by definition undefined.

Now to my point: calculating the big-O of a piece of software. Why can't you write a program that does that? After all, the answer is either some function or it is undefined.

Also, I've not said anything about the programming language. What about a purely functional programming language? Can it be calculated there?

Comments (2)

戈亓 2024-12-10 02:56:50

OK, so let's talk about Turing machines (a similar discussion using the Random-Access model could be had, but I adopt this for simplicity).

An upper bound on the time complexity of a TM says something about the order of the rate at which the number of transitions the TM makes grows with the input size. Specifically, if we say a TM executes an algorithm which is O(f(n)) in the worst case for input size n, we are saying that there exist n0 and c such that, for all n > n0, T(n) <= c*f(n). So far, so good.
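For concreteness, here is the same condition written as a formula, plus a small illustrative instance (the particular T, c and n0 are my own example, not taken from the answer):

\[
T(n) = O(f(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ \text{such that} \ \forall n > n_0 : \ T(n) \le c \cdot f(n)
\]
% Example: T(n) = 3n + 5 is O(n), witnessed by c = 4 and n_0 = 5, since 3n + 5 \le 4n whenever n > 5.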

Now, the thing about Turing machines is that they can fail to halt, that is, they can execute forever on some inputs. Clearly, if for some input of size n* > n0 a TM takes an infinite number of steps, there is no f(n) satisfying the condition (with finite n0, c) laid out in the last paragraph; that is, T(n) != O(f(n)) for any f. OK; if we were able to say for certain that a TM would halt on all inputs of length at least n0, for some n0, we'd be home free. Trouble is, this is equivalent to solving the halting problem.
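To make the "equivalent to solving the halting problem" step concrete, here is a minimal Java-flavoured sketch. Program, Analyzer and bigO(...) are hypothetical placeholders of my own, not real APIs; the point is only that a total, always-correct complexity analyzer would hand us a halting decider.

// Hypothetical sketch: if a total bigO(...) analyzer existed, halting would be decidable.
public class HaltingFromBigO {

    interface Program {
        void run(String input);
    }

    interface Analyzer {
        // Hypothetically returns a finite worst-case bound f(n), or null if no bound exists.
        String bigO(Program p);
    }

    // If 'analyzer' were total and correct, this would decide whether p halts on input x.
    static boolean halts(Program p, String x, Analyzer analyzer) {
        // The wrapper ignores its own input and simply runs p on the fixed input x.
        Program wrapper = ignored -> p.run(x);
        // If p halts on x, the wrapper takes a fixed finite number of steps on every input: O(1).
        // If p runs forever on x, the wrapper never halts on any input, so no bound exists.
        return analyzer.bigO(wrapper) != null;
    }
}

Since no algorithm can decide halting, no such total analyzer can exist; the best we can hope for is an analyzer that answers for some programs and gives up on others.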

So we conclude this: if a TM runs forever on some input of size n > n0, then we cannot define an upper bound on its complexity; furthermore, it is an undecidable problem to algorithmically determine whether a TM will halt on all inputs larger than a fixed, finite n0.

或十年 2024-12-10 02:56:50

The reason it is impossible to answer the question "is the 'while(System.in.readline()){}' program going to stop?" is that the input is not specified, so in this particular case the problem is lack of information and not undecidability.

The halting problem is about the impossibility of constructing a general algorithm which, when provided with both a program and an input, can always tell whether that program with that input will finish running or continue to run forever.
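For readers who have not seen why such a general algorithm cannot exist, the standard diagonal argument can be sketched in Java-flavoured pseudocode (halts(...) is the hypothetical decider; nothing here is a real API):

public class Diagonal {
    // Hypothetical total decider: returns true iff running program p on input x halts.
    static boolean halts(String p, String x) {
        throw new UnsupportedOperationException("cannot actually exist");
    }

    // Feed a program its own source code and do the opposite of what halts(...) predicts.
    static void paradox(String programSource) {
        if (halts(programSource, programSource)) {
            while (true) { /* loop forever */ }
        }
        // otherwise: halt immediately
    }
    // Running paradox on its own source code contradicts whatever halts(paradox, paradox)
    // returns, so no total, always-correct halts(...) can be written.
}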

In the halting problem, both program and input can be arbitrarily large, but they are intended to be finite.

Also, there is no specific instance of 'program + input' that is undecidable in itself: given a specific instance, it is (in principle) always possible to construct an algorithm that analyses that instance and/or class of instances, and calculates the correct answer.
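A small illustration of that point (the class and method names are mine): for any one fixed program-and-input pair, one of the two trivial constant functions below is already a correct decider for that instance; the difficulty is only that we may not know which one.

public class FixedInstance {
    // One of these two is a correct "decider" for any single, fixed program+input pair;
    // undecidability only rules out one algorithm that is correct for all pairs at once.
    static boolean answerYes(String program, String input) { return true; }
    static boolean answerNo(String program, String input)  { return false; }
}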

However, if a problem is undecidable, then no matter how many times the algorithm is extended to correctly analyse additional instances or classes of instances, the process will never end: it will always be possible to come up with new instances that the algorithm will not be capable of answering unless it is further extended.

I would say that the big O of "while(System.in.readline()){}" is O(n) where n is the size of the input (the program could be seen as the skeleton of e.g. a line counting program).
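As a sketch of that line-counting reading (the class name is mine, and the question's System.in.readline() is spelled out with a BufferedReader so the code compiles):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// One pass over the input, so the running time is O(n) in the size n of the (finite) input.
public class LineCount {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        long lines = 0;
        while (in.readLine() != null) {   // the loop from the question, made compilable
            lines++;
        }
        System.out.println(lines);
    }
}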

The big O is defined in this case because for every input of finite size the program halts.

So the question to be asked might be: "does a program halt on every possible finite input it may be provided with?"

If the halting problem, or any other undecidable problem, can be reduced to that question, then it is undecidable.

It turns out that such a reduction exists, as clarified here:
https://cs.stackexchange.com/questions/41243/halting-problem-reduction-to-halting-for-all-inputs

Undecidability is a property of problems and is independent of programming languages that are used to construct programs that run on the same machines. For instance you may consider that any program written in a functional programming language can be compiled into machine code, but that same machine code can be produced by an equivalent program written in assembly, so from a Turing machine perspective, functional programming languages are no more powerful than assembly languages.

Also, undecidability would not prevent an algorithm from calculating the big O for countless (in theory, infinitely many) programs, so an effort to construct an algorithm for that purpose would not necessarily be pointless.
