Are there counterintuitive results in theoretical computer science?

30

Some mathematical and logical paradoxes can probably be carried over to computers automatically, but are there any paradoxes that were discovered within computer science itself?

I mean counter-intuitive results that look like contradictions.

serg
source
2
Are you looking for things that merely feel paradoxical, or for genuine inconsistencies (e.g. Russell's paradox)?
Raphael
2
I don't know a suitable tag for this question, maybe [big-picture] or [soft-question]. Could you give an example of the mathematical paradoxes you mentioned, so we know what you are talking about?
Kaveh
2
Obviously, there are no known inconsistencies in computer science --- that would be worrying. Are you just looking for counterintuitive results? Are results like the PCP theorem, Kleene's recursion theorem, and public-key cryptosystems strange enough to count as paradoxes for you?
Thomas
4
@serg, it would help a lot if you could reply to clarify your question. Either you mean your question in the very "soft" sense Thomas suggested - in which case it is correctly tagged big-picture and my answer below is off-topic - or you mean it in a somewhat technical sense ("applications and impact of logical paradoxes in computer science"), in which case your question should be tagged logic, not big-picture. Or you mean something entirely different that none of us four commenters has guessed!
Rob Simmons
4
Counterintuitiveness is a function of time. The fact that so many different problems are all NP-complete was undoubtedly counterintuitive before Karp's paper, just as the fact that channels have a definite information capacity was before Shannon. However, people are now used to these results.
Peter Shor

Answers:

28

I find the fact that network flow is solvable in polynomial time counter-intuitive. At first sight it looks much harder than many NP-hard problems. Or, to put it differently, there are many results in CS where the running time needed to solve the problem is far better than you would expect.
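
To make this concrete, here is a minimal Python sketch of the classic Edmonds-Karp augmenting-path algorithm, one of the standard polynomial-time max-flow algorithms (the function and variable names are my own illustration, not from the original answer):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Minimal Edmonds-Karp sketch: augment along shortest residual paths.
    capacity: dict mapping (u, v) -> capacity. Runs in O(V * E^2) time."""
    residual = dict(capacity)
    adj = {}
    for (u, v) in capacity:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
        residual.setdefault((v, u), 0)   # reverse edges for the residual graph
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow              # no augmenting path left: flow is maximum
        # recover the path, find its bottleneck, and push flow along it
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

# tiny example: two disjoint unit-capacity paths from 's' to 't'
caps = {('s', 'a'): 1, ('a', 't'): 1, ('s', 'b'): 1, ('b', 't'): 1}
print(max_flow(caps, 's', 't'))  # 2
```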

Sariel Har-Peled
source
6
also: I've had students comment on the non-intuitiveness of network flow, and even the fact that matching can be done in poly time seems quite surprising.
Suresh Venkat
9
I don't quite agree. Network flow can be easily reduced to linear programming so you are claiming that linear programming being in P is counterintuitive. Perhaps. But duality shows that LP is in NP and co-NP which at least suggests that it may not be that hard. What is less intuitive is that min-cut is solvable in P because it is not naturally a "fractional" problem.
Chandra Chekuri
21

P=NP implies EXP ⊄ P/poly (a result credited to Meyer) is one example of this, and it came to my mind both from Ketan Mulmuley's GCT work and from Ryan Williams' recent result, which again used an upper bound for CIRCUIT-SAT to prove a lower bound for NEXP against ACC.
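
For context, a sketch of the standard argument, assuming Meyer's theorem (if EXP ⊆ P/poly then EXP = Σ₂ᵖ, as credited to Meyer in the Karp-Lipton paper cited in the comments below): if P = NP then the polynomial hierarchy collapses to P, so EXP ⊆ P/poly would give EXP = Σ₂ᵖ = P, contradicting the time hierarchy theorem. Hence P = NP forces EXP ⊄ P/poly: an algorithmic upper bound yields a circuit lower bound.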

Suresh Venkat
source
Suresh, please provide a reference for Meyer's result.
Mohammad Al-Turkistany
1
I don't know if there's a direct reference. The Karp-Lipton paper (faculty.cs.tamu.edu/chen/courses/637/2008/pres/ashraf.pdf) credits Meyer with this result, but there's no citation.
Suresh Venkat
20

SAT has a polynomial-time algorithm only if P=NP. We don't know whether P=NP. However, I can write down an algorithm for SAT which is polynomial-time if P=NP is true. I don't know the correct reference for this, but the Wikipedia page gives such an algorithm and credits Levin.
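
To show the flavor of the construction, here is a toy Python sketch of Levin-style universal search. It is my own simplification: the real algorithm dovetails over an enumeration of all programs, whereas this one dovetails over an explicit list of candidate solvers and verifies whatever they output; like the basic construction, it only halts when it finds a satisfying assignment.

```python
from itertools import count

def check(formula, assignment):
    """Verify a candidate assignment against a CNF formula in polynomial time."""
    return all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in formula["clauses"])

def brute_force(formula):
    """One possible 'program': a generator yielding all assignments in order."""
    n = formula["n"]
    for m in range(2 ** n):
        yield [(m >> i) & 1 for i in range(n)]

def universal_search(formula, solvers):
    """Toy Levin-style search: dovetail over candidate solvers, giving the
    i-th solver roughly a 2^-i share of the time, and verify every answer.
    If some solver finds satisfying assignments in polynomial time, so does
    this procedure (up to a constant factor), whichever solver that is."""
    runs = [solver(formula) for solver in solvers]
    for budget in count(1):
        for i, run in enumerate(runs):
            for _ in range(budget >> i):      # exponentially decreasing share
                try:
                    candidate = next(run)
                except StopIteration:
                    break
                if check(formula, candidate):
                    return candidate          # verified satisfying assignment

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
phi = {"n": 3, "clauses": [[1, -2], [2, 3], [-1, -3]]}
print(universal_search(phi, [brute_force]))   # [1, 1, 0]
```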

mikero
source
5
Similarly, we have a provably optimal algorithm for factoring that runs in polynomial time if factoring is in P, yet we do not know whether factoring is in P (or how to analyze the running time of this optimal algorithm).
Ross Snider
9
This is typically referred to as "Levin universal search," and the correct reference is: L. Levin, Universal enumeration problems. Problems of Information Transmission, 9(3):265--266, 1973 (translated from Russian). This is the same paper in which Levin introduced NP-completeness (see also Cook & Karp, but as far as I know neither of them introduced the notion of an optimal universal search algorithm). The English translation can be found in Trakhtenbrot's famous survey: doi.ieeecomputersociety.org/10.1109/MAHC.1984.10036
Joshua Grochow
18

Computability certainly screws most students. A beautiful example with a high confusion rate is this:

f(n) := 1 if the decimal expansion of π contains 0^n, and 0 otherwise.

Is f computable?

The answer is yes; see a discussion here. Most people immediately try to construct f with present knowledge. That cannot work and leads to a perceived paradox, which is really just a subtlety.
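
One way to see why the answer is yes (a sketch of the standard case analysis, not from the linked discussion): since 0^n contains 0^m for every m ≤ n, either 0^n appears in π for every n, or there is a largest N for which it appears. In both cases f is a trivially computable function; we just do not know which member of the family below it is. The names here are purely illustrative:

```python
def make_candidate(N):
    """Threshold function that equals f if N is the largest n with 0^n in pi."""
    return lambda n: 1 if n <= N else 0

def candidate_infinity(n):
    """Constant-1 function, which equals f if 0^n appears in pi for every n."""
    return 1

# f is one of: candidate_infinity, make_candidate(0), make_candidate(1), ...
# Every member of this family is computable, hence so is f, even though we
# cannot (with present knowledge) point to the one that actually computes f.
```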

Raphael
source
7
This to me seems like one of those problems where all the trickiness is in how it's stated. It reminds me a bit of taking an algorithm, declaring by fiat that n is some constant, and proclaiming that the algorithm now runs in constant time. The harder question people will typically think you're asking is whether we can write a program that either proves π contains a 0^n string for all n, or determines the largest n for which that is true.
Joseph Garvin
4
Sure, but the fact that they think like that does not illustrate trickiness in the function's formulation, but rather that people do not understand the difference between existence and construction.
Raphael
18

One surprising and counter-intuitive result is that IP=PSPACE, proved using arithmetization around 1990.

As Arora & Barak put it (p. 157) "We know that interaction alone does not give us any languages outside NP. We also suspect that randomization alone does not add significant power to computation. So how much power could the combination of randomization and interaction provide?"

Apparently quite a bit!

Huck Bennett
source
13

As Philip said, Rice's theorem is a good example: one's intuition before studying computability is that there must surely be something we can compute about computations. It turns out that we can only compute something about some computations.

Max
source
13

How about Martin Escardo's publications showing that there are infinite sets that can be exhaustively searched over in finite time? See Escardo's guest blog post on Andrej Bauer's blog, for instance, on "Seemingly impossible functional programs".
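
A hedged Python transcription of the core trick from that blog post (Escardo's original code is in Haskell; this version is my own adaptation, representing a point of Cantor space as a function from index to bit, and it terminates because a total computable predicate can only inspect finitely many bits):

```python
def cons(bit, rest):
    """Prepend one bit to an infinite binary sequence (a function int -> {0,1})."""
    return lambda n: bit if n == 0 else rest(n - 1)

def find(p):
    """Return some sequence satisfying p if one exists (otherwise an arbitrary one).
    The recursion terminates because p only ever looks at finitely many bits."""
    def seq(n):
        zero_branch = cons(0, find(lambda a: p(cons(0, a))))
        chosen = zero_branch if p(zero_branch) else cons(1, find(lambda a: p(cons(1, a))))
        return chosen(n)
    return seq

def exists(p):
    """Decide whether SOME infinite binary sequence satisfies p."""
    return p(find(p))

def forall(p):
    """Decide whether EVERY infinite binary sequence satisfies p."""
    return not exists(lambda a: not p(a))

# Exhaustive search over an uncountable set, in finite time:
print(exists(lambda a: a(2) == 1 and a(5) == 0))        # True
print(forall(lambda a: a(3) + a(8) == a(8) + a(3)))     # True
print(exists(lambda a: a(1) == 1 and a(1) == 0))        # False
```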

Dominic Mulligan
source
12

The Recursion Theorem certainly seems counter-intuitive the first time you see it. Essentially it says that when you are describing a Turing Machine, you can assume it has access to its own description. In other words, I can build Turing Machines like:

TM M accepts n iff n is a multiple of the number of times "1" appears in the string representation of M.

TM N takes in a number n and outputs n copies of itself.

Note that the "string representation" here is not referring to the informal text description, but rather an encoding.
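
As a concrete illustration of the second example, here is a short Python program (my own, using the standard quine trick rather than the recursion theorem itself) that, given n on the command line, prints n copies of its own four-line source:

```python
import sys
s = 'import sys\ns = %r\nn = int(sys.argv[1]) if len(sys.argv) > 1 else 1\nprint((s %% s) * n, end="")\n'
n = int(sys.argv[1]) if len(sys.argv) > 1 else 1
print((s % s) * n, end="")
```

Saved as a file and run as `python selfcopy.py 3`, it prints three copies of itself; the recursion theorem guarantees that this kind of self-reference is always available, without any string-formatting cleverness.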

Mark Reitblatt
source
11

Proving information-theoretic results based on complexity-theoretic assumptions is another counter-intuitive result. For instance, Bellare et al. in their paper The (True) Complexity of Statistical Zero Knowledge constructively proved that, under the certified discrete log assumption, any language that admits honest-verifier statistical zero knowledge also admits statistical zero knowledge.

The result was so odd that it surprised the authors themselves. They pointed out this fact several times; for instance, in the introduction:

Given that statistical zero-knowledge is a computationally independent notion, it is somewhat strange that properties about it could be proved under a computational intractability assumption.

PS: A stronger result was later proved unconditionally by Okamoto (On Relationships between Statistical Zero-Knowledge Proofs).

Description of some terms

Since the above result includes a lot of cryptographic jargon, I try to informally define each term.

  1. Certified discrete log assumption: It is hard (for poly-size circuits) to solve the discrete logarithm, even if the group prime (p) is certified; that is, the factorization of p−1 is given.
  2. Zero knowledge: A protocol which yields no knowledge to polynomial-time bounded parties.
  3. Statistical zero knowledge: A protocol which yields no information, even to computationally unbounded parties, except with negligible probability.
  4. Honest-verifier zero knowledge: A protocol which yields no knowledge to polynomial-time bounded parties, if they act as specified by protocol.
M.S. Dousti
source
11

How about the fact that computing the permanent is #P-complete, but computing the determinant - a way weirder operation - happens to be in the class NC?

This seems rather strange - it did not have to be that way (or maybe it did ;-) )
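
For concreteness, the two quantities are the same Leibniz-style sum over permutations, differing only in the sign factor. A small Python sketch of the defining formulas (deliberately the naive n! computation, not an efficient algorithm):

```python
from itertools import permutations

def parity_sign(p):
    """+1 for even permutations, -1 for odd ones (counted via inversions)."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_and_perm(A):
    """det(A) = sum over permutations p of sign(p) * prod A[i][p(i)];
    perm(A) is the same sum without the sign. Yet det is in NC and perm is #P-complete."""
    n = len(A)
    det = perm = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        det += parity_sign(p) * prod
        perm += prod
    return det, perm

print(det_and_perm([[1, 2], [3, 4]]))  # (-2, 10)
```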

Akash Kumar
source
7

The linear programming problem is solvable in (weakly) polynomial time. This seems very surprising: why should we be able to find an optimal vertex among the exponentially many vertices of a high-dimensional polytope? Why should we be able to solve a problem that is so ridiculously expressive?

Not to mention all the exponential-size linear programs that we can solve using the ellipsoid method and separation oracles, and other methods (adding variables, etc.). For example, it's amazing that an LP with an exponential number of variables, such as the Karmarkar-Karp relaxation of Bin Packing, can be efficiently approximated.

Sasho Nikolov
source
2
The fact that there is an exponential number of solutions is not unique to LP. Most discrete optimization problems have the same feature, but they have poly-time algorithms, no? LP is a special case of convex optimization, where a local optimum is a global optimum. We can also solve convex optimization, modulo an epsilon issue due to irrationality and other technical reasons. For LP, thanks to the combinatorial structure, one can jump from this small-error solution to a vertex that gives an exact solution. The equivalence of separation and optimization is surprising, though.
Chandra Chekuri
2
@ChandraChekuri what I had in mind is that a high-dimensional geometric search problem sounds like it should be hard... but of course there are also good reasons why it's not (convexity). I should probably emphasize the equivalence of separation and optimization instead. Plenty of surprising consequences there, like solving hard optimization problems on perfect graphs, for example.
Sasho Nikolov
3

Whenever I teach automata, I always ask my students if they find it surprising that nondeterminism doesn't add any power to finite-state automata (i.e., that for every NFA there is an equivalent -- possibly much larger -- DFA). About half the class reports being surprised, so there you go. [I myself have lost the "feel" for what is surprising at the intro level.]
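
For reference, a minimal Python sketch of the textbook subset construction (no epsilon moves; the names and example are mine), which is exactly why the equivalent DFA can be exponentially larger: its states are sets of NFA states.

```python
from itertools import chain

def nfa_to_dfa(alphabet, delta, start, accepting):
    """Subset construction: DFA states are sets of NFA states, so the result
    may have up to 2^|Q| states even though it accepts the same language."""
    start_set = frozenset([start])
    dfa_states, dfa_delta, worklist = {start_set}, {}, [start_set]
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in dfa_states:
                dfa_states.add(T)
                worklist.append(T)
    dfa_accepting = {S for S in dfa_states if S & accepting}
    return dfa_states, dfa_delta, start_set, dfa_accepting

# NFA for "the second symbol from the end is 1" over {0, 1}
delta = {('q0', '0'): {'q0'}, ('q0', '1'): {'q0', 'q1'},
         ('q1', '0'): {'q2'}, ('q1', '1'): {'q2'}}
states, *_ = nfa_to_dfa('01', delta, 'q0', {'q2'})
print(len(states))  # 4 DFA states for this 3-state NFA; "k-th from the end" needs 2^k
```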

Students definitely find it surprising at first that R ≠ RE. I challenge them to produce an algorithm that determines whether a given Java program will halt, and they typically try to search for endless while loops. As soon as I show them ways of constructing loops whose termination is far from obvious, the surprise factor goes away.

Aryeh
source