I am trying to understand the intuition behind kernel SVMs. Now, I understand how a linear SVM works: a decision line is drawn that splits the data as well as possible. I also understand the principle behind porting the data to a higher-dimensional space, and how this can make it easier to find a linear decision boundary in that new space. What I do not understand is how a kernel is used to project the data points into this new space.
What I do know about a kernel is that it effectively represents the 'similarity' between two data points. But how does that relate to the projection?
machine-learning
svm
kernel-trick
Karnivaurus
Answers:
Let h(x) be the projection into a high-dimensional space F. Essentially, the kernel function is K(x1,x2) = ⟨h(x1), h(x2)⟩, which is an inner product. So the kernel is not used to project the data points; rather, it is a result of the projection. It can be regarded as a measure of similarity, but in an SVM it is more than that.
The optimization for finding the best separating hyperplane in F involves h(x) only through this inner product. In other words, if you know K(⋅,⋅), you do not need to know the exact form of h(x), which makes the optimization easier.
Every kernel K(⋅,⋅) has a corresponding h(x). So if you are using an SVM with that kernel, you are implicitly finding the linear decision boundary in the space that h(x) maps to.
Chapter 12 of The Elements of Statistical Learning gives a brief introduction to SVMs, with more detail on the connection between kernels and feature maps: http://statweb.stanford.edu/~tibs/ElemStatLearn/
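To make this correspondence concrete, here is a small numeric sketch (my own illustration, not part of the original answer) using the degree-2 polynomial kernel, for which h(x) can be written down explicitly; it checks that K(x1, x2) = ⟨h(x1), h(x2)⟩:

```python
import numpy as np

def poly_kernel(x, z):
    # Degree-2 polynomial kernel K(x, z) = (x . z + 1)^2
    return (np.dot(x, z) + 1) ** 2

def h(x):
    # Explicit feature map for this kernel on 2-D inputs:
    # h(x) = (1, sqrt(2) x1, sqrt(2) x2, x1^2, x2^2, sqrt(2) x1 x2)
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

a = np.array([1.0, 2.0])
b = np.array([-0.5, 3.0])

print(poly_kernel(a, b))      # kernel value computed directly: 42.25
print(np.dot(h(a), h(b)))     # same value via the inner product in the feature space
```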
The useful properties of kernel SVMs are not universal - they depend on the choice of kernel. To gain intuition, it helps to look at one of the most commonly used kernels, the Gaussian kernel. Remarkably, this kernel turns the SVM into something very much like a k-nearest-neighbor classifier.
This answer explains the following:

1. Achieving perfect separation
Perfect separation is always possible with a Gaussian kernel because of the kernel's locality property, which leads to an arbitrarily flexible decision boundary. For a sufficiently small kernel bandwidth, the decision boundary will look as if you simply drew little circles around the points whenever they are needed to separate the positive and negative examples:
(Figure from Andrew Ng's online machine learning course.)
So why does this happen, from a mathematical perspective?
Consider the standard setup: you have a Gaussian kernel K(x,z) = exp(−‖x−z‖²/σ²) and training data (x(1), y(1)), (x(2), y(2)), …, (x(n), y(n)), where the y(i) values are ±1. We want to learn the classifier function

ŷ(x) = ∑i wi y(i) K(x(i), x).

Now how will we ever assign the weights wi? Do we need an infinite-dimensional space and a quadratic programming algorithm? No, because I only want to show that I can separate the points perfectly. So I make σ a billion times smaller than the smallest separation ‖x(i) − x(j)‖ between any two training examples, and I simply set wi = 1. This means that all the training points are a billion sigmas apart as far as the kernel is concerned, and each point completely controls the sign of ŷ in its neighborhood. Formally, we have

ŷ(x(k)) = ∑i wi y(i) K(x(i), x(k)) = y(k) K(x(k), x(k)) + ∑i≠k y(i) K(x(i), x(k)) = y(k) + ε,

where ε is some arbitrarily tiny value. We know ε is tiny because x(k) is a billion sigmas away from every other point, so for all i ≠ k we have

K(x(i), x(k)) = exp(−‖x(i) − x(k)‖²/σ²) ≈ 0.

Since ε is so small, ŷ(x(k)) definitely has the same sign as y(k), and the classifier achieves perfect accuracy on the training data. In practice this would be terrible overfitting, but it shows the tremendous flexibility of the Gaussian-kernel SVM, and how it can act very similarly to a nearest-neighbor classifier.
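Here is a minimal numeric sketch of this argument (my own illustration, with hypothetical toy data): all weights are set to wi = 1 and σ is chosen far smaller than the smallest separation between training points, so each training point is classified by its own kernel bump:

```python
import numpy as np

# Toy training data: points with labels +/- 1 (hypothetical example)
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1, -1, -1, 1])

# Choose sigma much smaller than the smallest pairwise separation
dists = [np.linalg.norm(a - b) for i, a in enumerate(X) for b in X[i + 1:]]
sigma = min(dists) / 1e3   # "a billion times" is overkill numerically; 1e3 already suffices

def K(a, b):
    # Gaussian kernel K(a, b) = exp(-||a - b||^2 / sigma^2)
    return np.exp(-np.linalg.norm(a - b) ** 2 / sigma ** 2)

def y_hat(x):
    # y_hat(x) = sum_i w_i y_i K(x_i, x) with all weights w_i = 1
    return sum(y_i * K(x_i, x) for x_i, y_i in zip(X, y))

# Each training point is dominated by its own kernel bump: sign(y_hat) equals its label
print([np.sign(y_hat(x_k)) == y_k for x_k, y_k in zip(X, y)])   # [True, True, True, True]
```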
2. Kernel SVM learning as linear separation
The fact that this can be interpreted as "perfect linear separation in an infinite-dimensional feature space" comes from the kernel trick, which allows you to interpret the kernel as an abstract inner product in some new feature space:

K(x, y) = ⟨Φ(x), Φ(y)⟩,

where Φ(x) is the mapping from the data space into the feature space. It follows immediately that ŷ(x) is a linear function in the feature space:

ŷ(x) = ∑i wi y(i) ⟨Φ(x(i)), Φ(x)⟩ = L(Φ(x)),

where the linear function L(v) is defined on feature-space vectors v as

L(v) = ∑i wi y(i) ⟨Φ(x(i)), v⟩.

This function is linear in v because it is just a linear combination of inner products with fixed vectors. In the feature space, the decision boundary ŷ(x) = 0 is just L(v) = 0, the level set of a linear function. This is the very definition of a hyperplane in the feature space.
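To see this linearity concretely, here is a sketch (my own, using scikit-learn and a degree-2 polynomial kernel, for which Φ can be written out explicitly); it checks that the fitted SVM's decision function equals a linear function ⟨w, Φ(x)⟩ + b in the feature space:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] * X[:, 1])          # a nonlinear, XOR-like labeling

# Kernel K(x, z) = (x . z + 1)^2, i.e. kernel='poly', degree=2, gamma=1, coef0=1
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0).fit(X, y)

def Phi(X):
    # Explicit feature map with Phi(x) . Phi(z) = (x . z + 1)^2
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1),
                            np.sqrt(2) * x1, np.sqrt(2) * x2,
                            x1 ** 2, x2 ** 2,
                            np.sqrt(2) * x1 * x2])

# Hyperplane normal in feature space: w = sum_i (alpha_i y_i) Phi(x_i), plus bias b
w = (clf.dual_coef_ @ Phi(clf.support_vectors_)).ravel()
b = clf.intercept_

X_test = rng.normal(size=(5, 2))
print(Phi(X_test) @ w + b)              # linear function of Phi(x)
print(clf.decision_function(X_test))    # identical up to floating-point error
```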
3. How the kernel is used to construct the feature space
Kernel methods never actually "find" or "compute" the feature space or the mapping Φ explicitly. Kernel learning methods such as the SVM do not need them to work; they only need the kernel function K. It is possible to write down a formula for Φ, but the feature space it maps to is quite abstract and is only really used for proving theoretical results about the SVM. If you are still interested, here is how it works.

Basically, we define an abstract vector space V where each vector is a function from X to R. A vector f in V is a function formed from a finite linear combination of kernel slices:

f(x) = ∑i αi K(x(i), x).

The inner product on this space is not the ordinary dot product, but an abstract inner product based on the kernel:

⟨ ∑i αi K(x(i), ⋅), ∑j βj K(x(j), ⋅) ⟩ = ∑i ∑j αi βj K(x(i), x(j)).

This definition is very deliberate: its construction ensures the identity we need for linear separation, ⟨Φ(x), Φ(y)⟩ = K(x, y).

With the feature space defined in this way, Φ is a mapping X → V, taking each point x to the "kernel slice" at that point:

Φ(x) = K(x, ⋅), that is, Φ(x) is the function y ↦ K(x, y).

You can prove that V is an inner product space when K is a positive definite kernel. See this paper for details.
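A tiny sketch of this construction (my own illustration, with a Gaussian kernel): represent an element of V by its base points and coefficients, define the inner product through the kernel as above, and check that ⟨Φ(x), Φ(y)⟩ reproduces K(x, y):

```python
import numpy as np

def K(a, b, sigma=1.0):
    # Gaussian kernel
    return np.exp(-np.linalg.norm(np.asarray(a) - np.asarray(b)) ** 2 / sigma ** 2)

def inner(points_f, alphas, points_g, betas):
    # < sum_i alpha_i K(x_i, .), sum_j beta_j K(z_j, .) > = sum_ij alpha_i beta_j K(x_i, z_j)
    return sum(a * b * K(x, z)
               for a, x in zip(alphas, points_f)
               for b, z in zip(betas, points_g))

x, y = [0.3, -1.2], [1.0, 0.5]

# Phi(x) is the single kernel slice K(x, .), i.e. one base point with coefficient 1
print(inner([x], [1.0], [y], [1.0]))   # <Phi(x), Phi(y)>
print(K(x, y))                         # K(x, y) -- the same number
```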
For the background and the notation I refer to How to calculate decision boundary from support vectors?.
So the features in the 'original' space are the vectors xi, the binary outcomes are yi ∈ {−1, +1}, and the Lagrange multipliers are αi.
As said by @Lii (+1), the kernel can be written as K(x, y) = h(x) ⋅ h(y), where '⋅' represents the inner product.
I will try to give some 'intuitive' explanation of what this h looks like, so this answer is no formal proof; it just wants to give some feeling of how I think this works. Do not hesitate to correct me if I am wrong.
I have to 'transform' my feature space (so my xi) into some 'new' feature space in which the linear separation will be solved.
For each observation xi, I define the function ϕi(x) = K(xi, x), so I have a function ϕi for each element of my training sample. These functions ϕi span a vector space; denote the vector space spanned by the ϕi as V = span(ϕi, i = 1, 2, …, N).
I will try to argue that this is the vector space in which the linear separation will be possible. By the definition of the span, each vector in the vector space V can be written as a linear combination of the ϕi, i.e. ∑i γi ϕi with i running from 1 to N, where the γi are real numbers.
The transformation that maps my original feature space to V is defined as

Φ(xi) = ϕi = K(xi, ⋅).
This map Φ maps my original feature space onto a vector space that can have a dimension as large as the size of my training sample.
Obviously, this transformation (a) depends on the kernel, (b) depends on the values xi in the training sample, (c) can, depending on my kernel, have a dimension that goes up to the size of my training sample, and (d) produces vectors of V that look like ∑i γi ϕi, where the γi are real numbers.
Looking at the function f(x) in How to calculate decision boundary from support vectors?, it can be seen that f(x) = ∑i yi αi ϕi(x) + b.
In other words, f(x) is a linear combination of the ϕi, and this is a linear separator in the V-space: it is a particular choice of the γi, namely γi = αi yi!
The yi are known from our observations, and the αi are the Lagrange multipliers that the SVM has found. In other words, the SVM finds, through the use of a kernel and by solving a quadratic programming problem, a linear separation in the V-space.
This is my intuitive understanding of how the 'kernel trick' allows one to 'implicitly' transform the original feature space into a new feature space V, with a different dimension. This dimension depends on the kernel you use, and for the RBF kernel this dimension can go up to the size of the training sample.
So kernels are a technique that allows the SVM to transform your feature space; see also What makes the Gaussian kernel so magical for PCA, and also in general?
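As a concrete check of this picture (my own sketch; scikit-learn stores the fitted yi αi in dual_coef_ and b in intercept_), the decision function of an RBF-kernel SVM can be rebuilt by hand as a linear combination of the kernel slices ϕi = K(xi, ⋅):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.0, 1, -1)   # circular class boundary

gamma = 0.7
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

X_test = rng.normal(size=(5, 2))

# f(x) = sum_i (alpha_i y_i) K(x_i, x) + b, summed over the support vectors
K_test = rbf_kernel(clf.support_vectors_, X_test, gamma=gamma)   # shape (n_SV, 5)
f_manual = clf.dual_coef_ @ K_test + clf.intercept_

print(f_manual.ravel())
print(clf.decision_function(X_test))   # matches the manual reconstruction
```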
Let me explain it. The kernel trick is the key here. Consider the case of a Radial Basis Function (RBF) kernel. It transforms the input into an infinite-dimensional space. The transformation of an input x to ϕ(x) comes from the Taylor-series expansion of the exponential in the kernel (see the derivation in http://www.csie.ntu.edu.tw/~cjlin/talks/kuleuven_svm.pdf).
The input space is finite-dimensional, but the transformed space is infinite-dimensional. Transforming the input into an infinite-dimensional space is something that happens as a result of the kernel trick. Here x is the input and ϕ is the transformed input. But ϕ(x) is never computed as such; instead, the product ϕ(xi)Tϕ(x) is computed, which is just the exponential of the negative squared distance between xi and x.
There is a related question, Feature map for the Gaussian kernel, to which there is a nice answer: https://stats.stackexchange.com/a/69767/86202.
The output, or decision function, is a function of the kernel matrix K(xi, x) = ϕ(xi)Tϕ(x), and not of the input x or the transformed input ϕ(x) directly.
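To see the infinite-dimensional ϕ at work without ever forming it completely, here is a sketch (my own, for 1-D inputs and the kernel exp(−(x−z)²), following the Taylor-expansion idea in the slides linked above): truncating the expansion after a handful of terms already reproduces the kernel value:

```python
import numpy as np
from math import factorial

def phi_truncated(x, n_terms=20):
    # First n_terms coordinates of the (infinite) feature map for K(x, z) = exp(-(x - z)^2):
    # phi_k(x) = exp(-x^2) * sqrt(2^k / k!) * x^k,  k = 0, 1, 2, ...
    return np.array([np.exp(-x ** 2) * np.sqrt(2.0 ** k / factorial(k)) * x ** k
                     for k in range(n_terms)])

x, z = 0.8, -0.3
exact = np.exp(-(x - z) ** 2)                 # kernel value computed directly
approx = phi_truncated(x) @ phi_truncated(z)  # truncated inner product in feature space

print(exact, approx)   # the truncated sum is already very close to the exact kernel value
```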
Mapping to a higher dimension is merely a trick to solve a problem that is defined in the original dimension; so concerns such as overfitting your data by going into a dimension with too many degrees of freedom are not a byproduct of the mapping process, but are inherent in your problem definition.
Basically, all that the mapping does is convert conditional classification in the original dimension into a plane definition in the higher dimension, and because there is a 1-to-1 relationship between the plane in the higher dimension and your conditions in the lower dimension, you can always move between the two.
Taking the problem of overfitting: clearly, you can overfit any set of observations by defining enough conditions to isolate each observation into its own class, which is equivalent to mapping your data to (n-1) dimensions, where n is the number of your observations.
Taking the simplest problem, where your observations are [[1,-1], [0,0], [1,1]] in the form [feature, value], moving into two dimensions and separating your data with a line simply turns the conditional classification of feature < 1 && feature > -1 : 0 into defining a line that passes through (-1 + epsilon, 1 - epsilon). If you had more data points and needed more conditions, you would just need to add one more degree of freedom to your higher dimension for each new condition that you define. You can replace the process of mapping to a higher dimension with any process that provides you with a 1-to-1 relationship between the conditions and the degrees of freedom of your new problem. Kernel tricks simply do that.
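A tiny sketch of this idea (my own illustration, using the mapping x → (x, x²) as the added degree of freedom, with labels matching the rule quoted above): the three 1-D points become separable by a horizontal line in two dimensions:

```python
import numpy as np

# 1-D features and labels matching the rule "-1 < feature < 1 : class 0, otherwise class 1"
x = np.array([-1.0, 0.0, 1.0])
labels = np.array([1, 0, 1])

# Add one degree of freedom by mapping each point x -> (x, x^2)
X2 = np.column_stack([x, x ** 2])

# In the new space, the horizontal line x2 = 0.5 separates the classes perfectly
predicted = (X2[:, 1] > 0.5).astype(int)
print(predicted)                             # [1 0 1]
print(np.array_equal(predicted, labels))     # True
```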
Consider, for example, a problem where your observations are of the form [x, floor(sin(x))]. Mapping this problem into two dimensions is not helpful at all; in fact, mapping to any plane will not help, because defining the problem as a set of conditions of the form x < a && x > b : z does not work in this case. The simplest mapping in this case is a mapping into polar coordinates, or into the imaginary plane.