From a purely abstract mathematical / computational point of view, how can one discover or reason about problems like 3-SAT, Subset Sum, Travelling Salesman, etc.? Can we even reason about them in a meaningful way from a purely functional perspective? Is that possible?
I have been pondering this question purely out of my own curiosity, as part of learning the lambda calculus model of computation. I understand that it is "not intuitive", and that this is why Gödel preferred Turing's model. Still, I would like to know what the known theoretical limitations of this functional style of computation are, and how much of a barrier they pose to analyzing the class of NP problems.
Answers:
You may want to look at cost semantics for functional languages. These are various measures of computational complexity for functional languages that do not pass through any kind of Turing machine, RAM machine, etc. A good place to start looking is this Lambda the Ultimate post, which has some good further references.
Section 7.4 of Bob Harper's Practical Foundations for Programming Languages describes a cost semantics.
The paper On the relative usefulness of fireballs by Accattoli and Coen shows that the λ-calculus has at most linear blowup with respect to the RAM machine model.
In short, on this other planet everything would be the same as far as NP is concerned, but there would be fewer buffer overflows and not as much garbage lying around.
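To make the idea of a cost semantics concrete, here is a minimal sketch in Haskell (my own illustration of the general idea, not the specific semantics from Harper's book or from the Accattoli and Coen paper): a call-by-value evaluator for the untyped λ-calculus that counts β-reduction steps, so that "time" is defined directly on terms rather than via a Turing or RAM machine.

```haskell
-- A minimal cost-semantics sketch: "time" is the number of beta-reduction
-- steps, measured directly on lambda-terms (no Turing or RAM machine).
-- The representation and names here are illustrative only.

data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving Show

-- Naive substitution: for brevity we assume all bound variables are distinct,
-- so no capture-avoiding renaming is performed.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- Call-by-value evaluation, returning the result together with the number of
-- beta steps performed; that count is the cost of the computation.
eval :: Term -> (Term, Int)
eval v@(Var _)   = (v, 0)
eval v@(Lam _ _) = (v, 0)
eval (App f a) =
  let (f', cf) = eval f
      (a', ca) = eval a
  in case f' of
       Lam x b -> let (r, cb) = eval (subst x a' b)
                  in (r, cf + ca + cb + 1)      -- one beta step here
       _       -> (App f' a', cf + ca)

-- Example: (\x. \y. x) applied to two values takes two beta steps.
main :: IO ()
main = print (eval (App (App k i) i'))
  where
    k  = Lam "x" (Lam "y" (Var "x"))
    i  = Lam "z" (Var "z")
    i' = Lam "w" (Var "w")
```

Results like the one cited above are, roughly, about showing that a count of this kind can be implemented on a RAM machine with only modest overhead, so that complexity classes defined by counting reduction steps agree with the usual ones.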
At the request of Andrej and PhD, I am turning my comment into an answer, with apologies for self-advertising.
I recently wrote a paper in which I look at how to prove the Cook-Levin theorem (the NP-completeness of SAT) using a functional language (a variant of the λ-calculus) instead of Turing machines. A summary:
So the only thing that would change "on this side of the planet" is, perhaps, that we would have considered a λ-calculus-related problem, instead of a Boolean-circuit-related problem, to be the "primordial" NP-complete problem.
A side note: the above-mentioned proof could be reformulated in a variant of Accattoli's λ-calculus with linear explicit substitutions mentioned in Andrej's answer, which is perhaps more standard than the λ-calculus I use in my paper.
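As a purely illustrative aside (the definitions below are a toy of my own, not the actual HO CIRCUIT SAT from the paper): with Church booleans, a Boolean circuit can be written directly as a λ-term, and satisfiability becomes the question of whether some choice of inputs makes that term reduce to true. A small Haskell rendering:

```haskell
{-# LANGUAGE RankNTypes #-}
-- Toy illustration (not the definitions from the paper): a Boolean circuit
-- expressed as a lambda-term over Church booleans, so that "satisfiability"
-- becomes "does some input make the term reduce to true?".

newtype CBool = CBool (forall a. a -> a -> a)

ctrue, cfalse :: CBool
ctrue  = CBool (\t _ -> t)
cfalse = CBool (\_ f -> f)

cand, cor :: CBool -> CBool -> CBool
cand (CBool p) q = p q cfalse
cor  (CBool p) q = p ctrue q

cnot :: CBool -> CBool
cnot (CBool p) = p cfalse ctrue

-- The circuit (x AND NOT y) OR (NOT x AND y), i.e. XOR, as a lambda-term.
circuit :: CBool -> CBool -> CBool
circuit x y = cor (cand x (cnot y)) (cand (cnot x) y)

-- Read a Church boolean back into a native Bool.
toBool :: CBool -> Bool
toBool (CBool p) = p True False

-- Satisfiability by brute force over the two inputs.
satisfiable :: Bool
satisfiable = or [ toBool (circuit x y) | x <- [ctrue, cfalse], y <- [ctrue, cfalse] ]

main :: IO ()
main = print satisfiable   -- True: e.g. x = true, y = false
```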
Later edit: my answer was just a bit more than a cut-and-paste from my comment and I realize that something more should be said concerning the heart of the question, which, as I understand it, is: would it be possible to develop the theory of NP-completeness without Turing machines?
I agree with Kaveh's comment: the answer is yes, but perhaps only retrospectively. That is, when it comes to complexity (counting time and space), Turing machines are unbeatable in simplicity: the cost model is self-evident for time and almost self-evident for space. In the λ-calculus, things are far less evident: time cost models such as those mentioned by Andrej and given in Harper's book date from the mid-90s, and space cost models are still almost non-existent (I am aware of essentially one work, published in 2008).
So, of course we can do everything using the purely functional perspective, but it is hard to imagine an alternative universe in which Hartmanis and Stearns define complexity classes using λ-terms and, 30 to 50 years later, people start adapting their work to Turing machines.
Then, as Kaveh points out, there is the "social" aspect: people were convinced that NP-completeness is important because Cook proved that a problem considered central in a widely studied field (theorem proving) is NP-complete (or, in more modern terminology, using Karp reductions, coNP-complete). Although the above shows that this may be done in the λ-calculus, maybe it would not be the most immediate thing to do (I have my reservations on this point, but let's not make this post too long).
Last but not least, it is worth observing that, even in my work mentioned above, when I show that HO CIRCUIT SAT may be reduced to CIRCUIT SAT, I do not explicitly exhibit a λ-term computing the reduction and prove that it always normalizes in a polynomial number of leftmost reduction steps; I just show that there is an algorithm which, intuitively, may be implemented in polynomial time, just as any complexity theorist would not explicitly build a Turing machine and prove it polytime (which, let me say, would be even crazier than writing down a λ-term, let alone checking it for mistakes).
This phenomenon is pervasive in logic and computability theory, and complexity theory simply inherits it: formal systems and models of computation are often used only to know that one can formalize intuitions; after that, intuition alone is almost always enough (as long as it is used carefully). So reasoning about the difficulty of solving problems like SAT, TAUT, SUBSET SUM, GI, etc., and thus developing the theory of NP-completeness, may largely be done independently of the underlying computational model, as long as a reasonable cost model is found. The λ-calculus, Turing machines, or Brainfuck programs: it doesn't really matter, as long as you know your intuitions are sound. Turing machines gave an immediate, workable answer, and people didn't (and still don't) feel the need to go further.
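For concreteness, one standard way to make "a reasonable cost model" precise (a hedged paraphrase of the usual invariance thesis, not something stated in this thread) is to ask that the model and RAM machines simulate each other with at most polynomial overhead in time:

```latex
% A hedged paraphrase of the invariance thesis, restricted to time: a cost
% model T_M for a computational model M is "reasonable" if M and RAM machines
% simulate each other within polynomial overhead.
\[
  \exists\, \text{polynomials } p, q:\qquad
  T_{M} \;\le\; p\!\left(T_{\mathrm{RAM}}\right)
  \quad\text{and}\quad
  T_{\mathrm{RAM}} \;\le\; q\!\left(T_{M}\right).
\]
% Under such a cost model, P and NP defined by counting steps in M coincide
% with the usual classes, which is what licenses the model-independent
% reasoning described above.
```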