I've only scanned through the paper, but here's a rough summary of how it all hangs together.
From page 86 of the paper.
... polynomial time algorithms succeed by successively “breaking up” the problem into smaller subproblems that are joined to each other through conditional independence. Consequently, polynomial time algorithms cannot solve problems in regimes where blocks whose order is the same as the underlying problem instance require simultaneous resolution.
Other parts of the paper show that certain NP problems cannot be broken up in this manner. Thus NP ≠ P.
Much of the paper is spent defining conditional independence and proving these two points.
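As a toy illustration of that "breaking up" idea (my own example, not the paper's): when the clauses of a formula form a chain, fixing one variable makes everything to its left conditionally independent of everything to its right, so a polynomial-time dynamic program can count satisfying assignments without enumerating all 2^n of them.

```python
from itertools import product

# Toy example (ours, not the paper's): clauses (x_i OR x_{i+1}) form a chain,
# so once x_i is fixed, the prefix and suffix are conditionally independent
# and a left-to-right dynamic program suffices.

def count_chain_sat(n):
    """Count assignments satisfying (x1 or x2) and (x2 or x3) ... in O(n)."""
    # dp[v] = number of satisfying prefix assignments with the current variable = v
    dp = {0: 1, 1: 1}
    for _ in range(n - 1):
        dp = {0: dp[1],            # next variable 0: clause needs previous = 1
              1: dp[0] + dp[1]}    # next variable 1: clause satisfied either way
    return dp[0] + dp[1]

def count_brute(n):
    """Exponential enumeration of the same formula, for comparison."""
    return sum(all(bits[i] or bits[i + 1] for i in range(n - 1))
               for bits in product([0, 1], repeat=n))
```

The claim quoted above is that hard instances have no such decomposition: blocks of variables as large as the instance itself must be resolved simultaneously.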
Dick Lipton has a nice blog entry about the paper and his first impressions of it. Unfortunately, it also is technical. From what I can understand, Deolalikar's main innovation seems to be to use some concepts from statistical physics and finite model theory and tie them to the problem.
I'm with Rex M on this one: some results, mostly mathematical ones, cannot be expressed to people who lack the technical mastery.
His argument revolves around a particular task, the Boolean satisfiability problem, which asks whether a collection of logical statements can all be simultaneously true or whether they contradict each other. This is known to be an NP problem.
Deolalikar claims to have shown that there is no program which can complete it quickly from scratch, and that it is therefore not a P problem. His argument involves the ingenious use of statistical physics, as he uses a mathematical structure that follows many of the same rules as a random physical system.
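To make the "check quickly, but not solve quickly" asymmetry concrete, here is a minimal sketch; the clause encoding and names are my own, not anything from the paper.

```python
from itertools import product

# A tiny CNF-SAT instance: each clause is a list of (variable, sign) pairs,
# e.g. (0, True) means x0 and (1, False) means NOT x1. Encoding is our own.

def check(clauses, assignment):
    """Verifying a proposed assignment is fast: polynomial in the input size."""
    return all(any(assignment[v] == s for v, s in clause) for clause in clauses)

def solve(clauses, n):
    """The only obvious way to *find* an assignment tries all 2^n of them."""
    for bits in product([False, True], repeat=n):
        if check(clauses, bits):
            return bits
    return None  # unsatisfiable

# (x0 OR x1) AND (NOT x0 OR NOT x1): exactly one of the two must be true.
clauses = [[(0, True), (1, True)], [(0, False), (1, False)]]
```

The P vs NP question is whether the exponential search in `solve` can always be replaced by something polynomial; Deolalikar claims it cannot.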
I liked this ( http://www.newscientist.com/article/dn19287-p--np-its-bad-news-for-the-power-of-computing.html ):

The effects of the above can be quite significant:
If the result stands, it would prove that the two classes P and NP are not identical, and impose severe limits on what computers can accomplish – implying that many tasks may be fundamentally, irreducibly complex.
For some problems – including factorisation – the result does not clearly say whether they can be solved quickly. But a huge sub-class of problems called "NP-complete" would be doomed. A famous example is the travelling salesman problem – finding the shortest route between a set of cities. Such problems can be checked quickly, but if P ≠ NP then there is no computer program that can complete them quickly from scratch.
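The travelling-salesman contrast in that quote can be sketched in a few lines; the 4-city distance matrix below is a made-up example of mine.

```python
from itertools import permutations

# Hypothetical 4-city symmetric distance matrix; the numbers are our own.
DIST = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

def tour_length(tour):
    """Checking a proposed tour is quick: one pass over its edges."""
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def shortest_tour(n):
    """Finding the best tour by brute force examines (n-1)! orderings."""
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p))
    return (0,) + best, tour_length((0,) + best)
```

Scoring one candidate route is instant; the number of routes to search grows factorially, and P ≠ NP would mean no program avoids that blow-up in general.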
This is my understanding of the proof technique: he uses first order logic to characterize all polynomial time algorithms, and then shows that for large SAT problems with certain properties, no polynomial time algorithm can determine their satisfiability.
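If I understand correctly, that first-order characterization is in the spirit of the Immerman–Vardi theorem: first-order logic plus a least-fixed-point operator captures P on ordered structures. A sketch of the least-fixed-point ingredient, using the classic reachability example:

```python
# Sketch of the least-fixed-point construction behind "FO(LFP) captures P":
# reachability is the least fixed point of the first-order definable operator
#   R(y) := (y = source) OR (exists x: R(x) AND edge(x, y)).
# We iterate that monotone operator until it stops growing.

def reachable(edges, source, n):
    """Vertices reachable from source in a graph on n vertices."""
    R = set()
    for _ in range(n + 1):        # a fixed point is reached within n rounds
        new = {source} | {y for (x, y) in edges if x in R}
        if new == R:
            break
        R = new
    return R
```

Each round is a polynomial-time first-order query and at most n rounds are needed, so the whole computation is polynomial; conversely, every polynomial-time property of ordered structures is expressible this way.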
One other way of thinking about it, which may be entirely wrong but is my first impression on a first pass: think of assigning and clearing terms in circuit satisfaction as forming and breaking clusters of "ordered structure". He then uses statistical physics to show that polynomial-time operations are not fast enough to carry this out in a particular "phase space" of operations, because the clusters end up too far apart.
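A toy version of that clustering picture (my own construction, not the paper's): take the satisfying assignments of a formula and connect two of them when they differ in a single bit. The statistical-physics claim is that in the hard regime these clusters become numerous and widely separated.

```python
from itertools import product

# Our own toy example: group the satisfying assignments of a formula into
# connected components under single-bit flips (Hamming distance 1).

def clusters(is_sat, n):
    """Connected components of the solution space under one-bit flips."""
    sols = {bits for bits in product((0, 1), repeat=n) if is_sat(bits)}
    comps = []
    while sols:
        comp, frontier = set(), {sols.pop()}
        while frontier:
            a = frontier.pop()
            comp.add(a)
            for i in range(n):                      # flip one bit at a time
                b = a[:i] + (1 - a[i],) + a[i + 1:]
                if b in sols:
                    sols.remove(b)
                    frontier.add(b)
        comps.append(comp)
    return comps
```

For the formula "all variables equal", the solutions 000 and 111 form two clusters at Hamming distance 3, so no local single-bit search moves between them.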
Such a proof would have to cover all classes of algorithms, like continuous global optimization.

For example, in the 3-SAT problem we have to assign values to variables so as to satisfy every clause, each clause being an alternative of three variables or their negations. Notice that x OR y can be turned into minimizing

((x-1)^2+y^2)((x-1)^2+(y-1)^2)(x^2+(y-1)^2)

and analogously a clause of three variables yields a product of seven such terms, one per satisfying assignment.

Finding the global minimum of a sum of such polynomials over all clauses would solve our problem. (source)

This steps outside standard combinatorial techniques into the continuous world: gradient methods, methods for escaping local minima, evolutionary algorithms. It's a completely different kingdom - numerical analysis - and I don't believe such a proof could really cover it (?)
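The clause-to-polynomial translation above, spelled out as code. One quadratic "distance" factor is multiplied in per satisfying 0/1 assignment, so the product vanishes exactly on those assignments and is positive everywhere else, turning SAT into global minimization.

```python
# Build the polynomial for a clause from its list of satisfying 0/1 points:
# a product of squared Euclidean distances to each satisfying assignment.

def clause_poly(sat_points):
    """Polynomial that is 0 exactly at the given 0/1 points, positive elsewhere."""
    def p(*xs):
        total = 1.0
        for point in sat_points:
            total *= sum((x - a) ** 2 for x, a in zip(xs, point))
        return total
    return p

# (x OR y) is satisfied by (1,0), (1,1), (0,1) -- the three factors in the text.
p_or = clause_poly([(1, 0), (1, 1), (0, 1)])

# A 3-literal clause has 8 - 1 = 7 satisfying assignments, hence seven factors.
```

Summing one such polynomial per clause gives a smooth function whose global minimum is 0 exactly when the formula is satisfiable, which is why a proof would have to rule out fast continuous optimizers too.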
It's worth noting that with proofs, "the devil is in the details". The high-level overview is obviously something like:

Assume some sort of relationship between items, show that this relationship implies X, that X implies Y, and thus the argument is shown.

I mean, it may be via induction or any other form of proving things, but what I'm saying is that the high-level overview is useless. There is no point explaining it. Although the question itself relates to computer science, it is best left to mathematicians (though it is certainly incredibly interesting).