Dear Fellow Researchers,

I would like to draw your attention to the article “Non-Orthodox Combinatorial Models Based on Discordant Structures” by V. F. Romanov, published on arXiv.org: http://arxiv.org/abs/1011.3944 (download: PDF; original format LaTeX, 12pt).

The article presents a constructive proof of the effective resolvability of the 3-SAT problem, accompanied by a description of a polynomial algorithm created for that purpose.

The proof uses a unique graph-combinatorial model based on representing Boolean formulas as structures of compact triplets. The proof procedure relies on the induction principle applied to special constructive components: systems of hyperstructures.

This work was developed over a period of 10 years on my own initiative, independently of my professorial duties at Vladimir State University. During this time the work has been published in several Russian scientific journals, and two independent implementations of the algorithm have been produced.

The first version was developed in parallel with the theoretical work and was finished in 2002. It served as the basis for the statistical computer experiment mentioned in the paper.

The second version was implemented by my colleagues at the end of 2010. In the course of their work they used only the finished article, offering remarks on the text in passing to improve its clarity. The successful implementation of the algorithm testified to the sufficiency of the material presented in the article. The source code of the program is available on the Internet under the LGPL version 3 license: https://github.com/anjlab/sat3; see also http://romvf.wordpress.com

The existence of a polynomial algorithm for the 3-SAT problem leads to the conclusion that P=NP.

Your opinion and suggestions concerning my work would be of great value to me.

Yours faithfully, Vladimir Romanov


Interesting discussion of your article: http://news.ycombinator.com/item?id=2121727

Pingback: Proof that P is not Equal to NP? | Volkan Tunalı - Computer Engineer MSc, & PhD Candidate

I think the procedure for finding JSS for several subformulas should be explained in more detail, it’s very hard to understand this part of the article (this isn’t just my opinion, someone from Y-Combinator also called it ‘handwaving’).

I have found an example of an (unsatisfiable) CNF formula on which your program (version 1.0.3) fails with the following errors:

`Implementation error: 60 tier was built with errors. Please provide CNF file to developers.`

`Implementation error: 61 tier was built with errors. Please provide CNF file to developers.`

`Implementation error: 62 tier was built with errors. Please provide CNF file to developers.`

`Implementation error: 63 tier was built with errors. Please provide CNF file to developers.`

The file is from SAT competition 2009 examples, crafted ( http://www.cril.univ-artois.fr/SAT09/bench/crafted.7z ), SAT07/crafted/Medium/contest05/jarvisalo/mod2-3cage-unsat-9-12.sat05-2587.reshuffled-07.cnf

Also, while running your program on the uuf100-0345.cnf benchmark from SATLIB, I noticed that the displayed value of nextTierIndex sometimes decreases. For example, 37-38-39-40-41-42-43-44, then 43-44-45, 42-43-44-45-46, again 42-43-44-45-46-47-48, then 47-48-49, 47-48-49-50, and so on, in groups of 3-4 consecutive values. Since I don’t understand the algorithm, it looks suspiciously like backtracking search to me. It would be helpful if you could explain why it isn’t exponential.

Thanks for taking the time to do this. What you’ve found is an implementation bug: https://github.com/anjlab/sat3/issues/closed#issue/1

It’s fixed in the master branch. I’ve just created a new build with this issue fixed: https://github.com/downloads/anjlab/sat3/3-sat-experiment-2.0.0-SNAPSHOT-bin.zip

Please try this version instead.

The output of this version differs from version 1.0.3. We will update the wiki and publish the final 2.0.0 release soon.

The value of nextTierIndex decreases when a vertex gets removed from the basic graph; this means that some tiers above it that have already been calculated will change (some vertices will be removed from them). In that case we have to rebuild those tiers.

Update: You can find the most recent version of the solver on the project’s downloads page: https://github.com/anjlab/sat3/downloads.

> I think the procedure for finding JSS for several subformulas should be explained in more detail, it’s very hard to understand this part of the article (this isn’t just my opinion, someone from Y-Combinator also called it ‘handwaving’).

I know there’s just a hint about finding JSS in the article. We hadn’t planned to include this algorithm there, but you can see it in the source code (method findHSSRouteByReduce https://github.com/anjlab/sat3/blob/cbe4402223eaa2f062534b7c65425f1f6b704d34/3-sat-core/src/main/java/com/anjlab/sat3/Helper.java#L2561).

I’d agree with this comment: http://news.ycombinator.com/item?id=2122409

> – It not only tells you whether the formula is satisfiable or not, but gives you the satisfying values.

Any yes/no NP solver can be turned into one that does this with polynomial overhead. After finding that there is a solution, you ask the similar problem “does this have a solution with the first bit set to true?”, then “does this have a solution with the first bit set to true and the second bit set to true?” (fixing a bit to false whenever the answer is no), and so forth until you’ve identified all the bits.
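This self-reduction can be sketched in Java (a toy illustration, not part of the Romanov solver; `DecisionToSearch`, the oracle interface, and the brute-force example formula are all invented for this sketch):

```java
import java.util.Arrays;
import java.util.function.Predicate;

public class DecisionToSearch {

    /**
     * Turn a yes/no satisfiability oracle into a solution extractor.
     * The oracle answers: is the formula satisfiable when the first
     * prefix.length variables are fixed to the given values?
     */
    static boolean[] extract(int numVars, Predicate<boolean[]> oracle) {
        boolean[] prefix = new boolean[0];
        for (int i = 0; i < numVars; i++) {
            boolean[] next = Arrays.copyOf(prefix, i + 1);
            next[i] = true;            // try fixing the next bit to true
            if (!oracle.test(next)) {
                next[i] = false;       // a solution exists, so false must work
            }
            prefix = next;
        }
        return prefix;                 // a full satisfying assignment
    }

    // Toy oracle for the formula f = x0 AND NOT x1, implemented by
    // brute force over all completions of the prefix.
    static boolean bruteForceOracle(boolean[] prefix, int numVars) {
        for (int mask = 0; mask < (1 << numVars); mask++) {
            boolean[] a = new boolean[numVars];
            boolean consistent = true;
            for (int i = 0; i < numVars; i++) {
                a[i] = ((mask >> i) & 1) == 1;
                if (i < prefix.length && a[i] != prefix[i]) {
                    consistent = false;
                    break;
                }
            }
            if (consistent && a[0] && !a[1]) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        boolean[] solution = extract(2, p -> bruteForceOracle(p, 2));
        System.out.println(Arrays.toString(solution)); // prints [true, false]
    }
}
```

With n variables this makes exactly n oracle calls after the initial one, so a polynomial-time decision procedure yields a polynomial-time search procedure.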

What I can suggest right now is to answer concrete questions about what you don’t understand here. I will ask Vladimir Romanov whether we can expand this part of the article in future versions of the paper, or perhaps publish it as a separate article.

Pingback: Polynomial Time Code For 3-SAT Released, P==NP | JetLib News

Just a note concerning the paper itself.

It shows a conspicuous lack of grammatical editing, and several of the exemplars (notably the labels of Table 2, together with the complete lack of formula identification) are inconsistent, making your work difficult to follow. Setting aside the factuality and reasoning of your paper, these things limit review of the method itself in a way that is quite suspect.

Another issue is that you have chosen to use the phrase “it is obvious that” (notably in the CTF-CTS transform), which translates to “I have assumed that you’ll let me get away with stating” for most readers. If this work is to be taken seriously by the community and accepted as a proof of anything, you have to prove that these things can be done rather than assuming that they can.

Just wanted to add something, though you didn’t have anything to say, eh?

There’s more than a grain of truth in Adam’s comments.

The paper does seem unnecessarily difficult to digest.

I suspect that the good Professor Romanov is not particularly fluent in the English language, which is understandable. I’m not fluent in any natural language other than English, and the same is true of most native English speakers.

I am glad, though, that Vladimir Romanov made the effort to present us with an English version of his paper and that there’s a reference implementation available.

Hi, might this algorithm be a “fixed-parameter tractable” algorithm? AFAIK 3-SAT has been shown to be in FPT (with the treewidth of the incidence graph as the parameter).

Pingback: 3-SAT polynomal time solution? If true then P==NP | Distributed Life

Pingback: Possibly the most important news you will read this year | cartesian product

A while back I converted RC5 into 3-SAT. The result was a text file of about 60 MB that, if solved, yields the key bits for the 72-bit RC5 challenge.

The thing is, it has millions of “internal” variables, which are introduced because RC5 was represented as a directed acyclic graph, and sometimes one node (an XOR or AND operator) needed to be used multiple times.

I converted the RC5 problem graph into equations of the form (a OR b OR !c) AND (d OR !b OR e) … = TRUE

The trick (implemented in C++) was to first express RC5 as a directed acyclic graph of 32-bit words, then decompose it into XOR and AND logic gates over individual bits. The final step was to introduce a new variable (of which many millions are needed) for each node.

I think I created 2 or 3 clauses per XOR/AND gate. It worked, but CryptoMiniSat couldn’t solve it after more than 24 hours.
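For readers wondering where those clauses come from: the standard Tseitin-style gate encoding (a sketch of the general technique, not the commenter’s actual C++ code, which isn’t shown; the class and method names are invented) produces three clauses per AND gate and four per XOR gate:

```java
import java.util.ArrayList;
import java.util.List;

public class GateEncoder {

    // Tseitin-style encoding: introduce a fresh output variable c for each
    // gate and emit clauses forcing c <-> gate(a, b). Literals follow the
    // DIMACS convention: positive = variable, negative = its negation.

    static List<int[]> encodeAnd(int a, int b, int c) {   // c <-> (a AND b)
        List<int[]> clauses = new ArrayList<>();
        clauses.add(new int[]{-a, -b, c});   // a, b true   -> c true
        clauses.add(new int[]{a, -c});       // c true      -> a true
        clauses.add(new int[]{b, -c});       // c true      -> b true
        return clauses;
    }

    static List<int[]> encodeXor(int a, int b, int c) {   // c <-> (a XOR b)
        List<int[]> clauses = new ArrayList<>();
        clauses.add(new int[]{-a, -b, -c});  // both true   -> c false
        clauses.add(new int[]{a, b, -c});    // both false  -> c false
        clauses.add(new int[]{a, -b, c});    // only b true -> c true
        clauses.add(new int[]{-a, b, c});    // only a true -> c true
        return clauses;
    }

    // Check a full assignment against a clause list; value[v] holds the
    // truth value of variable v (1-based, index 0 unused).
    static boolean satisfies(List<int[]> clauses, boolean[] value) {
        for (int[] clause : clauses) {
            boolean sat = false;
            for (int lit : clause) {
                if ((lit > 0) == value[Math.abs(lit)]) {
                    sat = true;
                    break;
                }
            }
            if (!sat) {
                return false;
            }
        }
        return true;
    }
}
```

Millions of gates at roughly three to four clauses each is consistent with the clause counts in a 60 MB DIMACS file.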

I don’t know if your program can support a problem as big as a 60 MB text file…

Nope, the program crashed when I tried to run it on my CNF.

The CNF is known to be satisfiable and can be found (in non-CNF form) here:

http://options.googlecode.com/files/rc5_3sat.zip

Input:

```
p cnf 450415 1350064
-1 0
2 0
65536 -3 -4 0
-65536 3 0
-65536 4 0
...
(lots more lines)
```

Output:

```
18:25:59,783|5943 INFO [StopWatch] – Create CTF…
Program completed
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at cern.colt.map.OpenIntIntHashMap.rehash(Unknown Source)
    at cern.colt.map.OpenIntIntHashMap.put(Unknown Source)
    at com.anjlab.sat3.SimplePermutation.quickAdd(SimplePermutation.java:93)
    at com.anjlab.sat3.SimplePermutation.add(SimplePermutation.java:83)
    at com.anjlab.sat3.SimpleFormula.ensurePermutationContains(SimpleFormula.java:352)
    at com.anjlab.sat3.SimpleFormula.unionOrAdd(SimpleFormula.java:256)
    at com.anjlab.sat3.Join0.tryJoin(JoinMethods.java:950)
    at com.anjlab.sat3.Helper.joinTier(Helper.java:149)
    at com.anjlab.sat3.Helper.createCTF(Helper.java:120)
    at com.anjlab.sat3.Program.main(Program.java:160)
```

How about trying it with the following VM parameter: `-Xmx4096M` (4 GB heap) or more, on a 64-bit JVM?

Suggestions:

The summary section, section 2, involves a silent transformation between the tabular formula F and the CTF F_1. It’s too early to explain that transformation there, but it would be worthwhile to note that there is a canonicalization of (a different) F into a set of CTFs F_i (with a forward reference to Table 1) and that F_1 is one member of that set.

It is never stated whether “clearing” is applied only once (potentially leaving new candidates for clearing) or repeatedly until no more lines can be cleared.

I’d prefer the operation name “specialization” to “concretization”. It’s a common name for the operation of reducing some frame by restricting a variable to some element of its domain.

The permutations between Table 1 and Table 2 are mysterious. It would be *far* better to label the tableaux in Table 2 with the original variable names. For instance, the headers of F_2 should be {g, h, b, e, …}. Additionally useful would be to number the lines in the F tableau and reuse the same numbers in the F_i tableaux to indicate, for instance, that the two triples on the variables {h, b, e} have been reversed in the F_2 tableau.

Table 3 is presented in entirely the wrong place. Or, perhaps more accurately, the CTF -> CTS transformation is described in entirely the wrong place. A much better motivation would follow by placing the example tables and figures in the order in which they are produced by your algorithm, then putting the description of their production immediately prior to them and observations and notes about them immediately after. Otherwise, various sections of the paper are opaque (due to reference to text scattered irregularly through the prior text).

The observation of the 8-line limit in CTFs/CTSs would benefit from a note that a pair of matched tiers in a CTF/CTS pair exhaust the feasible subspace of the three involved variables.

Section 3 lacks any argument that the sequence of variable permutations is irrelevant, in the sense that unfortunate permutation choices might produce subsequent pathological behaviour. It isn’t that the formation of the CTs may be pathological, but that an unfortunate set of CTs may cause exponentially many operations in subsequent stages of the algorithm. For instance, can there be so many CTFs, and hence so many CTSs, that the union step of resolving pairs of CTSs involves the enumeration of exponentially many JS elements during the union? For instance, the analysis of Z at the top of page 3 involves the enumeration of two feasible assignments. If there were fewer lines in Z, then more feasible assignments would be produced. What keeps this set of feasible assignments from growing excessively on loosely coupled problems?

The use of the Table 4 unified structure S_1 as the basis for the graph in Figure 1 appears circular because the basic graphs must be constructed prior to unification. Table 4 occurs much too early. Figure 1 should be based on the Table 3 S_1. (Otherwise, to be blunt, you’ve assumed your answer (the unified S_1) before computing it.)

The use of unions of substructures arriving at the same (hyper-)vertex has the same defect that naive interval arithmetic has — in this setting, non-solutions are retained in the JS. I don’t see any argument that the growth of retained non-solutions is limited.

Would “systematic effective procedure” be a better name?

It is not obvious which Table 3 S_i are combined to make each of the Table 5 S_i. Reusing the names exactly does not increase clarity.

By the time you get to Table 5, de-permuting the variables and sorting rows lexicographically would make the matching to form JS members by inspection more visually straightforward.

It’s somewhat suspicious that you appear to cite this paper in your references. Perhaps some indication in the title that this paper is different from the 2007 paper is in order. For instance, “Non-Orthodox Combinatorial Models Based on Discordant Structures II”.

It is said that the order of the algorithm is O(N^4 · m). How large can the value of m be in terms of N? Is m a constant? If m is not a constant, then the order of the algorithm becomes much bigger. Can you explain?

‘m’ is introduced on page 1, and also discussed on page 18, first paragraph.

The maximum number of unique triplets is 8 * n * (n - 1) * (n - 2) / 6, i.e. 8 · C(n, 3): every choice of three distinct variables, with all eight sign patterns.

Just out of interest, I looked at some example instances where ‘m’ is 5-10, sometimes 100, times bigger than ‘n’.
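To make the growth concrete, here is a quick sketch (assuming the maximal m counts all sign patterns over all variable triples, i.e. 8 · C(n, 3); the class name is invented):

```java
public class TripletBound {

    // Maximum number of distinct triplets over n variables:
    // C(n, 3) variable triples, each with 8 sign patterns.
    static long maxTriplets(long n) {
        return 8 * n * (n - 1) * (n - 2) / 6;
    }

    public static void main(String[] args) {
        for (long n : new long[]{10, 100, 1000}) {
            System.out.println("n=" + n + "  max m=" + maxTriplets(n));
        }
    }
}
```

So the maximal m grows cubically in n (for n = 100 it is already 1,293,600), which is why O(N^4 · m) can be far larger than O(N^4) when m is not treated as a constant.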

You may be correct, you may not be. But if you are sure that P=NP, never bother about a correct proof of P!=NP, because I proved: P=NP iff P!=NP.

Best,

Rafee Kamouna.

It would be great if we could see a plot in which calculation time (or number of operations, or any other real performance measure) is shown as a function of input size. Of course the inputs should be related in some way, so that each “next” (bigger) instance is an expansion of the “previous” one. With enough input instances of different sizes we could see a nice plot, and the graph would show the actual performance in practice; hopefully a nice (non-exponential) curve can be seen…

Igor,

I agree with you. We have plans to run such an experiment with the reference implementation. There is already a solve-batch tool in the download package that one can use to solve multiple instances; it produces aggregated results (file aggregated-results.tab) with time metrics, and it’s easy to plot graphs from it. Now the question is: where can we get example instances for this experiment? It would be great if someone provided a set of 3-SAT instances that could be used to get a good picture of performance.

For problems of variable difficulty it is perhaps worth looking at the problem instances used in the SAT09 competition. The instances vary from easy (solvable in under 30 s by all solvers in the competition) to hard (not solved by any within 1200 s) and are delivered in CNF. I attempted to solve two of the hard problems (with -server -Xmx2048m) with unknown satisfiability, but my first run ran out of memory (instance gss-22-s100) before HSS creation. I killed the second instance (mulhs016) after extrapolating that the HSS creation phase would take a week or two to finish.

I’m hoping to see an elaborate investigation of these problems, as they have already been subjected to the most competitive solvers in the game. Good luck with your research.

I figured “elaborate investigation of these problems” could be a bit unclear, so to elaborate: by “these problems” I was referring to the CNF instances in the SAT09 competition. Additionally, realizing that smaller benchmarks could give better insight, it might also be a good idea to look at the instances in the SAT02 competition. A nice example is perhaps the smallest hard problems, where it took my consumer laptop 40 minutes to solve the UNSAT instance x1_3. It would be nice to see how solving x1_3 fares nowadays.

I tried a really big CNF and it just hung after producing this output:

```
VarCount: 3; ClausesCount: 4; TiersCount: 1
18:17:08,943|129891 INFO [StopWatch] – Create CTF: 110019ms; overall: 117538ms
18:17:09,899|130847 INFO [Program] – CTF count: 2177
```

So I gave up and hit ctrl-c after an hour or so of no output.

Willow,

The reference implementation was not designed to solve big instances. Its aim was to illustrate the algorithm to readers studying the article. That’s why the current implementation is single-threaded, uses only one CPU, and is written in Java (write once, run anywhere), so that everybody can run it on their laptop regardless of operating system.

Try running instances with n=100. Memory consumption is also polynomial: V(HSS) = (k-1) * (8*(n-2)) * (3*8*(n-2)). Here V(HSS) is a theoretical estimate of the memory the HSS uses. In practice, in the current implementation, this number should be multiplied by some constant of implementation overhead.
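As a back-of-the-envelope check of that estimate (a sketch only: k, the number of CTFs, is an illustrative value, the units are abstract cells rather than bytes, and the class name is invented):

```java
public class HssMemoryEstimate {

    // V(HSS) = (k - 1) * (8 * (n - 2)) * (3 * 8 * (n - 2)):
    // quadratic in n for a fixed k, so memory use stays polynomial.
    static long vHss(long k, long n) {
        return (k - 1) * (8 * (n - 2)) * (3 * 8 * (n - 2));
    }

    public static void main(String[] args) {
        long k = 50; // illustrative CTF count, not taken from the article
        System.out.println("n=100:  V(HSS) = " + vHss(k, 100));
        System.out.println("n=1000: V(HSS) = " + vHss(k, 1000));
    }
}
```

Even after multiplying by the implementation-overhead constant mentioned above, the bound stays polynomial in n.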

I briefly looked at your code. May I advise you to switch your code style to functional and generic programming paradigms? I am not judging your work, but do you believe many people will try to review complex algorithmic code (you stated it is a reference algorithm) that doesn’t even use Java 1.5 features like generics, foreach loops, or anonymous classes? This might mean switching to a more modern language; Java is not the only portable language/framework around (did you hear someone say “python”?).

I found two errors (or maybe my misunderstandings; briefly, EMs).

The first of them kills the rest of the work (if I’m correct).

EM1: Let’s unify a CTS set s1={0,0,0}, s2={1,0,0}, each consisting of a single triplet. Applying the first transformation makes s2 empty, while the set is satisfiable (e.g. by {1,1,1}).

EM2: decomposing a 3-SAT formula into a CTF conjunction is trivial: use every single triplet as a CTF.

With regards,

Winichenko M.Y.

VITI MEPHI prof.

That’s probably a misunderstanding.

By the formula decomposition rules, there may not be two different CTFs that have a triplet on the same three variable names.

So the EM2 statement is wrong in the general case, and hence EM1 cannot occur in your example.


Do you mean

1. that the unification process can be applied ONLY to the result of the specific decomposition process defined in the article? Does unification have any meaning in the general case?

2. that you are considering ONLY that specific decomposition? That isn’t mentioned in the article.

Both points make reading and understanding the rest of the text too difficult, and are non-modular in the programmer’s sense ;)

And of course it MUST be mentioned when you define a notion, not afterwards…

Also, EM1 changes to: s1=(x1|x2|x3); s2=(!x1|x2|x4), with the same result.

And EM2 changes to: “use as a CTF the conjunction of all triplets with the same variables”. I suppose that is the result you mean in the article, but you spoke of only ONE grouping, after which there will be “k CTFs”…

Sorry, I was wrong when I said that formula decomposition forbids several CTFs from having a triplet on the same three variable names. That’s not true. But that is the approach described in the article, and also used in the reference implementation. Of course, you can use any decomposition you want.

But, just want to make it clear:

In EM1 you give an example of a CTS set s1={0,0,0}, s2={1,0,0}. Remember that each CTS is the complement of a CTF, i.e. your initial formula contains 14 clauses. And it is not satisfiable.

The answer is no: you can apply unification to any decomposition. But just to make it clear again, you apply unification not to the CTF set (which is the result of the formula decomposition), but to the CTS set, where every CTS is the complement of a CTF.

The goal of unification is to remove lines that are known to be in contradiction and cannot be in the JSS.

The article defines one possible way of obtaining a decomposition, but the general analysis was made without assumptions about any specific decomposition.

I hope my comments make sense to you. I’m not very good at explaining things, so I’d be happy to clarify anything that doesn’t seem to make sense.

P.S.

I know Vladimir Romanov wrote you a direct email with detailed answers to your questions in Russian. Please let me know if you haven’t received it.

Hi,

I have also attempted an efficient k-SAT algorithm, and would like to help connect you with researchers and help fix the implementation. I would like to correspond by email. Would you please be so kind as to send me an email so that I can share the contacts I know?

I compiled it today and ran it on this CNF file (42 variables and 133 clauses): http://people.sc.fsu.edu/~jburkardt/data/cnf/hole6.cnf. It didn’t stop after 2 hours (so I pressed Ctrl-C). Is that normal?

Perhaps the CNF instance is too big?

(minisat solved it in 0.02 sec)

42 variables and 133 clauses is not that big an instance, actually. I think it should be solved in a few minutes.

I cannot check this on my machine, because the link to file doesn’t work for me.

Also, the solver should display progress in the console. Can you paste that log here?

I copied the CNF file to:

http://www.bytegem.com/tmp/hole6.txt

Try if you can download from there.

Or I’ll try to put it on another server.

Part of the log:


I tried to post this message twice (as a “reply”), but I don’t see it on the list of the comments … perhaps I’m doing something wrong.


Okay, I downloaded the file. It looks like this is not a 3-SAT instance. The solver converted it to 3-SAT automatically, and we get 189 variables and 280 clauses.

This instance is bigger, and it may take some time to solve.

You can use ‘-a’ command line option to disable internal assertions and speed up the solving process.

I tried to run this instance on my machine, and it took about 15 minutes to build the HSS. However, the HSS that was built was not empty, which is not correct, because the formula is UNSAT. It looks like the same issue we have on GitHub: https://github.com/anjlab/sat3/issues#issue/5

Ok,

I read:

> “Vladimir Romanov confirmed that his own implementation also built non-empty HSS for x1_16.shuffled.cnf”

Do you think it’s an implementation problem or a proof problem?

I’m not a real expert, so if it is a proof problem I’ll suspend my investigation of the algorithm; otherwise I’ll keep trying to understand it (I’m a good (Java) programmer).

Best regards,

Vor

Thank you for your attention to my work. You’d better suspend your investigations.

A shortcoming possibly exists in the filtration procedure, which requires an amendment.

Best regards,

V. R.

Dear Dmitry, dear Vladimir,

could you please provide references to your articles in Russian?

Best,

Dimitri Sverdlov

Dimitri,

Here is the Russian version: http://zhurnal.ape.relarn.ru/articles/2007/143.pdf

Dmitry, your message is very interesting to me.

I too (foolishly!!!) once got into this topic, and although my results have so far taken shape only as intuitions, I am firmly convinced that NP == P.

Of all the NP-complete problems, SAT is the best prepared for resolution; it is a question of the near future. Perhaps that is exactly why Romanov’s result is being hushed up…

And alas… the link to the Russian-language description

http://zhurnal.ape.relarn.ru/articles/2007/143.pdf

is for some reason unavailable. How else can one get acquainted with its contents?

Best of luck.

A. Kh.

Alexander,

the Russian version of the article is available here: https://sites.google.com/site/dmitrygusev/143.pdf

So it works? In P?? What did the SAT conference say?

Since you are coming to the subject late, I’ll summarise:

The paper, or at least its English-language version (there’s a link to a Russian-language version elsewhere in the comments), seems difficult to read and ambiguous on some points. Fortunately there is an example implementation, which should clear up at least some ambiguities. I’ve downloaded but not understood the Russian-language version of the paper, and from its structure I conclude that the English-language version was derived from it. Whether there are any details in the Russian version that didn’t make it into the English version I cannot say, as I don’t understand the language.

The implementation on GitHub is an academically oriented version that is not built for speed or for handling large instances. Compared to other SAT solvers on small to modest-sized problems, it runs slowly. It consumes considerable, but polynomial, amounts of memory. The algorithm seems to be in P, but in its present form, in both the implementation on GitHub and in Vladimir Romanov’s own code (not publicly available), it is reported to fail on at least two instances, where it should classify its input as unsatisfiable but fails to do so. (Of course, in these instances it also fails to produce a satisfying set.) For more details, see “Non-empty HSS for UNSAT instance has been built” in the “Issues” section on GitHub.

Nobody seems to know where the problem lies, and Prof. Romanov is presumably still working on it.

If anyone at SAT 2011 (I assume this is the conference you mention) got up and said that this proves that 3-SAT is in P, I’d say they were speaking prematurely. If you look at the dates on the messages reporting the problems, here and on GitHub, you’ll find that those problems were reported well before SAT 2011 started.

Thanks for the feedback…

Please join the discussion about new version of the article:

http://romvf.wordpress.com/2013/09/25/development-of-the-concept/