There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem and the Boolean satisfiability problem, that solve many real-world instances to optimality in reasonable time: the empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms. A key reason for the belief that P != NP is that, after decades of studying these problems, no one has been able to find a polynomial-time algorithm for any of the many important known NP-complete problems (see the list of NP-complete problems). It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience. This is, in my opinion, a very weak argument.
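As a concrete illustration of the gap between worst-case and practical hardness, here is a minimal sketch (my own example, not from the text) of the classic pseudo-polynomial dynamic program for the 0/1 knapsack problem. Although knapsack is NP-complete in general, this runs in O(n * W) time and dispatches the modest integer capacities of many real-world instances quickly.

```python
# Illustration only: 0/1 knapsack is NP-complete, yet the textbook dynamic
# program solves instances with small integer capacity W in O(n * W) time.
def knapsack(weights, values, capacity):
    """Return the maximum total value packable within `capacity`."""
    best = [0] * (capacity + 1)        # best[w] = max value within weight budget w
    for wt, val in zip(weights, values):
        # Iterate the budget downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + val)
    return best[capacity]
```

For example, `knapsack([3, 4, 5], [4, 5, 6], 7)` packs the items of weight 3 and 4 for a total value of 9.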
Draft history: first draft, Aug 6 (file overwritten several times and then finally removed, Aug 17); second draft, Aug 9 (file removed, Aug 10); file removed, Aug 11; third draft, Aug 11 (file removed, Aug 17); synopsis of proof, Aug 13 (file removed); HP technical report, Aug 20 (file removed, Aug 23?).
Here is the list of updates between the different versions.

Typos and minor errors

Any typos appearing in an earlier draft that no longer appear in the latest draft should be struck out.
Second draft, page 31, Definition 2: one fix is to move "in Chapter 5" to the beginning of the sentence. Still present in the third draft, but now on page 16 and referring to Chapter 6 instead.

Third draft: this neighborhood also contains self-loops, and so is technically not a neighborhood system in the sense of Definition 2.
Proof strategy

Excerpted from this comment of Ken Regan: Deolalikar has constructed a vocabulary V which apparently obeys the following properties: satisfiability of a k-CNF formula can be expressed by NP-queries over V, in particular by an NP-query Q over V that ties in to algorithmic properties.

An alternate perspective

Leonid Gurvits: And finally, using known probabilistic results, he gets nonmatching exponential [...]. If I am right, then this approach is a very clever, in a good sense elementary, way to push the previous algebraic-geometric approaches.
Of course, one can forget all but the original n variables and return to a distribution on the solution space by projecting, but it is not clear that the polylog-parameterizable property is preserved by this (there is no reason why the directed acyclic graph on the N literals has any meaningful projection onto the original n literals).
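The worry about projection can be illustrated with a toy probabilistic example (entirely hypothetical, not taken from the paper): a distribution that factorizes over a small directed graph z -> x, z -> y loses that factorized structure once the auxiliary variable z is summed out, leaving x and y correlated with no conditional independence to exploit.

```python
import itertools

# Toy example: a joint distribution factorizing over the DAG  z -> x, z -> y.
p_z = {0: 0.5, 1: 0.5}
p_x_given_z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}  # p_x_given_z[z][x]
p_y_given_z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}

# Joint over (x, y, z) via the DAG factorization p(z) p(x|z) p(y|z).
joint = {(x, y, z): p_z[z] * p_x_given_z[z][x] * p_y_given_z[z][y]
         for x, y, z in itertools.product((0, 1), repeat=3)}

# Project out z: the marginal over (x, y).
marg = {(x, y): joint[(x, y, 0)] + joint[(x, y, 1)]
        for x, y in itertools.product((0, 1), repeat=2)}
p_x = {x: marg[(x, 0)] + marg[(x, 1)] for x in (0, 1)}
p_y = {y: marg[(0, y)] + marg[(1, y)] for y in (0, 1)}

# After projection, x and y are dependent: p(x, y) != p(x) p(y).
dependent = abs(marg[(0, 0)] - p_x[0] * p_y[0]) > 1e-9
```

Here the marginal gives p(x=0, y=0) = 0.41 while p(x=0) p(y=0) = 0.25, so the projected distribution does not factor even though the lifted one did.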
In particular, the projected distribution is not in a recursively factorizable form, and the cluster geometry of the lifted solution space could be quite different from that of the original k-SAT problem. This issue does not seem to be discussed at all in the paper. To phrase it another way, consider the following excerpt from the paper describing part of the strategy (page 92, third draft): We have embedded our original set of variates into a polynomially larger product space, and obtained a directed graphical model on this larger space.
This product space has a nice factorization due to the directed graph structure. This is what we will exploit.

The problem in the LFP section of the proof

Anuj Dawar raises a point about order-invariance in intermediate stages of the LFP computation: Another problem I see is the order-invariance.
While it is clearly true that the LFP formula itself would be order-invariant, this does not mean that each stage would be. So it seems to me that this proof shows, at best, that k-SAT is not definable by an LFP each stage of which is order-invariant. This has an easy direct proof.

Claim 1: The solution spaces of problems in P are polylog-parametrizable.

Claim 2: The solution space of k-SAT is not polylog-parametrizable.
Caution: the term "solution spaces" requires some clarification. Now, Claim 1 could itself be split into two sub-claims, from which Claim 1 would follow by the Immerman-Vardi theorem (we need successor, or a linear order, of course).
Perhaps one of these sub-claims holds; the other, however, is problematic: the reduction that Deolalikar proposes is standard, but it definitely does not preserve anything on which we can apply Hanf-Gaifman locality.
However, this is only true in the original LFP. When encoding the LFP into a monadic LFP, as is done immediately afterwards in the same remark, the relation becomes a section of a relation of higher arity (as mentioned on page 87 and in Appendix A), using an induction relation. However, this induction relation itself must be monadically encoded, and it may not have bounded degree.
In fact, because of successor, it could define an ordering in two of its columns, which would render the Gaifman graph useless (indeed, the Gaifman graph could even collapse to have diameter one). Hanf-Gaifman locality also does not apply if we work with k-ary LFP directly, without the reduction to the monadic case.
Here the answer is that if we work with k-ary LFP directly, then the Gaifman graph changes after each iteration of the LFP computation, because we need to take care of the new tuples that are added to the k-ary inductively defined relation.
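To see concretely why an ordering is so damaging to locality arguments, here is a small sketch (my own illustration; `gaifman_diameter` is a hypothetical helper) that computes the diameter of the Gaifman graph of a finite structure with one binary relation: a successor relation yields a long path, while a linear order makes every pair of elements adjacent, collapsing the diameter to one.

```python
from collections import deque

def gaifman_diameter(n, relation):
    """Diameter of the Gaifman graph of ({0..n-1}, relation), via BFS.

    Two elements are adjacent iff they occur together in some tuple.
    Assumes the resulting graph is connected.
    """
    adj = {v: set() for v in range(n)}
    for a, b in relation:
        if a != b:
            adj[a].add(b)
            adj[b].add(a)

    def eccentricity(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())

    return max(eccentricity(v) for v in range(n))

n = 8
succ = [(i, i + 1) for i in range(n - 1)]                    # successor: a path
order = [(i, j) for i in range(n) for j in range(i + 1, n)]  # linear order: a clique
```

With n = 8, the successor structure has Gaifman diameter 7, while the linear order has diameter 1: local neighborhoods of the ordered structure contain everything.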
Neil Immerman has written a very interesting letter to Deolalikar critiquing his proof. Immerman also raises the problem of order-invariance mentioned by Anuj Dawar above. A related objection is that the argument appears to apply equally well to k-XORSAT, which is in P. Thus, if the argument were valid, there would have to be a specific step in it which worked for k-SAT but not for k-XORSAT; but no such specific step has been identified.
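For reference, here is a minimal sketch (my own, not from the letter) of why k-XORSAT is in P: each XOR clause is a linear equation over GF(2), so Gaussian elimination decides satisfiability in polynomial time, even though the solution space can exhibit the same clustering phenomena as k-SAT.

```python
def xorsat_satisfiable(clauses, n):
    """Decide a XOR-SAT system by Gaussian elimination over GF(2).

    Each clause is (set_of_variable_indices, parity): the XOR of the
    listed variables must equal the parity bit.  Returns True iff the
    linear system is consistent.
    """
    # Encode each equation as an (n+1)-bit integer: variable mask + parity bit.
    rows = [sum(1 << v for v in vars_) | (parity << n) for vars_, parity in clauses]
    pivots = {}                       # column -> pivot row
    for row in rows:
        for col in range(n):
            if not (row >> col) & 1:
                continue
            if col in pivots:
                row ^= pivots[col]    # eliminate this column from the row
            else:
                pivots[col] = row     # fresh pivot
                break
        else:
            # Row reduced to all-zero coefficients: inconsistent iff parity is 1.
            if (row >> n) & 1:
                return False
    return True
```

For instance, the system x0 ^ x1 = 1, x1 ^ x2 = 1 forces x0 ^ x2 = 0, so adding the clause x0 ^ x2 = 1 makes it unsatisfiable.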
There appear to be three issues related to the use of the characterization of P in terms of first-order logic with an ordering and a least-fixed-point operator.

Ordering, or lack thereof: Is the lack of ordering in the logical structures used to define the LFP structure a problem? Parity cannot be expressed without an ordering even with LFP, hence P is not captured without order.
This would avoid the problem of the order.

The issue of tupling: The paper requires that a certain predicate in the FO(LFP) formula be unary, and forces this by expanding neighborhoods and constructing k-tuples of parameters to act as single parameters. It is not clear how this affects the arguments about the propagation of local neighborhoods. Albert Atserias says, "The only justification for this crucial step seems to be Remark 7." The standard constructions of the so-called canonical structures that Vinay refers to (see the Ebbinghaus and Flum book, page 54) have a Gaifman graph of constant diameter, even without the linear order, due to the generalized equalities that allow the decoding of tuples into their components.
Issues along these lines were raised before, here and in comment 54 here. Steven Lindell presents a detailed critique of this problem, with an indication that there might be insurmountable problems.
It is reproduced here for completeness.

Boundedness and greatest fixed points: Charanjit Jutla has pointed out a problem with the argument in section 4, and a related issue in the section that follows it.

Issues with phase transitions

A brief synopsis of the terms discussed can be found here. The nomenclature of phase transitions: In the statistical-physics picture, there is not a single phase transition, but rather a set of distinct, well-defined transitions called clustering (equivalently, d1RSB), condensation, and freezing (Florent Krzakala and Lenka Zdeborova).
In the current version of the paper, the properties of d1RSB (clustering) and freezing are mixed up. Following the established definitions, and contrary to some earlier conjectures, it is now agreed that some polynomial algorithms work beyond the d1RSB (clustering) or condensation thresholds.
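The clustering terminology can be made concrete for tiny instances. The following toy sketch (my own illustration; `clusters` is a hypothetical helper) brute-forces the solution space of a small CNF formula and groups the satisfying assignments into connected components under single-bit flips, which is how clusters are defined in the statistical-physics picture.

```python
import itertools

def clusters(clauses, n):
    """Group the satisfying assignments of a CNF formula into clusters.

    `clauses` uses DIMACS-style literals: nonzero ints, negative = negated,
    variable i is index abs(i) - 1.  Two solutions are in the same cluster
    iff they are connected by single-bit flips within the solution set.
    Brute force: only sensible for tiny n.
    """
    def sat(a):
        return all(any(a[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)

    sols = [a for a in itertools.product((False, True), repeat=n) if sat(a)]
    comps, seen = [], set()
    for s in sols:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            seen.add(u)
            for t in sols:                       # neighbors at Hamming distance 1
                if sum(x != y for x, y in zip(u, t)) == 1:
                    stack.append(t)
        comps.append(comp)
    return comps
```

For example, the formula (x1 or x2) and (not x1 or not x2) has the two solutions 10 and 01 at Hamming distance 2, so its solution space splits into two clusters, while (x1 or x2) alone has one connected cluster.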
Graph coloring provides some evidence of this when one compares the performance of algorithms with the statistical-physics predictions. The property of the solution space of random k-SAT that the paper is actually using is called freezing. It was conjectured in the statistical-physics community (Florent Krzakala, Lenka Zdeborova and Cris Moore) that really hard instances appear in the frozen phase.
The existence of such a region was proven rigorously by Achlioptas and Ricci-Tersenghi, and their theorem appears as Theorem 5 in the paper. A similar problem might exist for other restricted CSPs which are in P but may exhibit a freezing stage, as pointed out by several other people.

Issues with random k-SAT

Complex solution spaces are uncorrelated with time complexity.
The following is a greatly expanded version of a series of Twitter comments by Ryan Williams. The author tries to use the fact that, for certain parameterizations, the space of satisfying assignments of a random k-SAT instance (the "solution space") has some intriguing structure.
There are two "meta" objections to this. One is that "intriguing structure in the solution space is not sufficient for NP hardness". The second is that "intriguing structure is not necessary for NP hardness".
But they do appear to give an obstacle to the general proof method.

How to map satisfiable formulas to trivial problems with exactly the same solution space structure

Very easy problems can also have complicated solution distributions. The following arose from discussions with Jeremiah Blocki. Here is one easy way to (non-uniformly!) map satisfiable formulas to instances of a trivial problem.
The map completely preserves the number of variables, the number of satisfying assignments, the distances between assignments, and the clusters; as far as I (Ryan Williams) can tell, it preserves every property that has been studied about random k-SAT.
This should be a barrier to any proof of P vs NP that attempts to argue that the solution space of an NP-hard problem is "too complex" for P; this is the objection most germane to the current proposed proof. Observe that this map completely preserves all distances between assignments, clusters, etc. That is, while the solution spaces of some random formulas may look complicated, it could still be that these formulas constitute a polynomial-time solvable problem for trivial reasons, independently of what the solution space looks like.
One can in fact get away with a map that just permutes the names of variables. The above arguments look like cheats. Of course they are! We are exploiting the fact that, for arbitrary problems in NP (and therefore P as well), the solution space of a problem is not uniquely well-defined in general.
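The variable-renaming point is easy to check exhaustively on small formulas. The sketch below (my own; the helper names are hypothetical) verifies that permuting variable names produces a different formula whose solution space has exactly the same size and the same multiset of pairwise Hamming distances, i.e. the same "geometry".

```python
import itertools

def solutions(clauses, n):
    """All satisfying assignments (as 0/1 tuples) of a CNF over n variables.

    DIMACS-style literals: nonzero ints, negative = negated.
    """
    def sat(a):
        return all(any(a[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
    return {a for a in itertools.product((0, 1), repeat=n) if sat(a)}

def permute_formula(clauses, perm):
    """Rename variable i to perm[i] (perm is a bijection on {1..n})."""
    return [[(1 if l > 0 else -1) * perm[abs(l)] for l in cl] for cl in clauses]

def distance_multiset(sols):
    """Sorted multiset of pairwise Hamming distances: a permutation-invariant
    fingerprint of the solution-space geometry."""
    return sorted(sum(x != y for x, y in zip(a, b))
                  for a, b in itertools.combinations(sorted(sols), 2))
```

Running this on any formula and any renaming confirms the invariance: the renamed formula is syntactically (and as a solution set) different, yet solution count, distances, and hence clusters are untouched.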
Hubie Chen raised a similar point in email correspondence. A solution space for an instance can only be defined with respect to some polynomial time verifier for the problem: this space is the set of witnesses that make the verifier accept on the instance. If you change the verifier, you change the solution space. The above argument only arises because the notion of solution space was forced to be verifier-independent over P and NP problems which looks critical to the P vs NP paper.
A hard distribution of solutions is not necessary for NP-hardness, either. In fact, the "hard" case of 3-SAT is the case where there is at most one satisfying assignment, since there is a randomized reduction from 3-SAT to 3-SAT with at most one satisfying assignment (Valiant-Vazirani). Note that, if plausible circuit lower bounds hold up, then Valiant-Vazirani can be derandomized to run in deterministic polynomial time. Proving that just one NP-complete problem has a complex solution space would be enough if it were also proved that all P problems have simple solution spaces.
But it is hard to make sense of what this proposition could really mean, in light of the above. To summarize, there is little correlation between the "hard structure" of the solution space for instances of some problem, and the NP-hardness of that problem.
The "boundary" between P and NP does not lie between "hard" and "easy" satisfiable formulas, but rather it lies between the satisfiable and unsatisfiable formulas.
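The Valiant-Vazirani isolation idea mentioned above can be sketched abstractly (my own toy version, operating directly on an explicit solution set rather than on formulas; `isolate` is a hypothetical helper): intersecting the solution set with random affine constraints over GF(2) leaves exactly one survivor with constant probability, once the number of constraints roughly matches the log of the number of solutions.

```python
import random

def isolate(sols, n, k, rng):
    """One Valiant-Vazirani-style trial on an explicit solution set.

    Intersect `sols` (a set of 0/1 tuples of length n) with k random
    affine constraints  <r, x> = b  over GF(2); return the survivors.
    In the real reduction the constraints are added to the formula itself.
    """
    survivors = set(sols)
    for _ in range(k):
        r = [rng.randrange(2) for _ in range(n)]
        b = rng.randrange(2)
        survivors = {s for s in survivors
                     if sum(ri * si for ri, si in zip(r, s)) % 2 == b}
    return survivors
```

With 2^k close to the number of solutions, the classical analysis shows a single survivor remains with probability at least 1/8 per trial, so repeating the trial a few dozen times isolates a unique solution almost surely.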
P != NP by Deolalikar
Colleagues immediately began dissecting the proof on academic blogs and wikis; a solid proof would earn Deolalikar fame and fortune. The P versus NP question seeks to determine, once and for all, which kinds of problems can be solved by computers and which kinds cannot. Imagine a jigsaw puzzle: finding the right arrangement of pieces is difficult, but you can tell when the puzzle is finished correctly just by looking at it. NP-class problems include many pattern-matching and optimization problems of great practical interest, such as determining the optimal arrangement of transistors on a silicon chip, developing accurate financial-forecasting models, or analyzing protein-folding behavior in a cell. If P equals NP, every NP problem would contain a hidden shortcut, allowing computers to quickly find perfect solutions to them.
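The jigsaw asymmetry can be made concrete with subset-sum, a standard NP problem (my own toy example, not from the article): verifying a proposed subset takes one linear pass, while the obvious way to find one tries exponentially many subsets.

```python
import itertools

def verify(nums, subset, target):
    """Checking a proposed solution is easy: one pass over the chosen indices."""
    return sum(nums[i] for i in subset) == target

def search(nums, target):
    """Finding a solution by brute force may try up to 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in itertools.combinations(range(len(nums)), r):
            if verify(nums, subset, target):
                return subset
    return None
```

For example, `search([3, 7, 12, 5], 15)` must work through many candidate subsets before hitting {3, 12}, yet `verify` confirms that answer instantly; this gap between finding and checking is exactly what the P versus NP question formalizes.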