When calculating the sample covariance matrix, is one guaranteed to get a symmetric and positive-definite matrix?
Currently, my problem has a sample of 4600 observation vectors with 24 dimensions.
Morten
Answers:
For a sample of vectors $x_i = (x_{i1}, \ldots, x_{ik})^\top$, with $i = 1, \ldots, n$, the sample mean vector is
$$\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i,$$
and the sample covariance matrix is
$$Q = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^\top.$$
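These formulas can be sketched directly in NumPy. This is my own illustration (the data here are random placeholders, using the sizes $n = 4600$ and $k = 24$ from the question):

```python
import numpy as np

# Placeholder data standing in for the questioner's 4600 x 24 sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(4600, 24))      # rows are the observation vectors x_i

x_bar = X.mean(axis=0)               # sample mean vector
Z = X - x_bar                        # centered vectors z_i = x_i - x_bar
Q = (Z.T @ Z) / X.shape[0]           # Q = (1/n) * sum_i z_i z_i^T

# Same result as NumPy's estimator with the 1/n convention (bias=True).
assert np.allclose(Q, np.cov(X, rowvar=False, bias=True))
```

Note that `np.cov` defaults to the $1/(n-1)$ convention; `bias=True` switches it to the $1/n$ convention used in the formula above.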
For a nonzero vector $y \in \mathbb{R}^k$, we have
$$y^\top Q y = y^\top \left( \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^\top \right) y = \frac{1}{n} \sum_{i=1}^n y^\top (x_i - \bar{x})(x_i - \bar{x})^\top y = \frac{1}{n} \sum_{i=1}^n \left( (x_i - \bar{x})^\top y \right)^2 \ge 0. \tag{$*$}$$
Therefore, Q is always positive semi-definite.
The additional condition for $Q$ to be positive definite was given in whuber's comment below. It goes as follows.
Define $z_i = x_i - \bar{x}$, for $i = 1, \ldots, n$. For any nonzero $y \in \mathbb{R}^k$, $(*)$ is zero if and only if $z_i^\top y = 0$ for each $i = 1, \ldots, n$. Suppose the set $\{z_1, \ldots, z_n\}$ spans $\mathbb{R}^k$. Then there are real numbers $\alpha_1, \ldots, \alpha_n$ such that $y = \alpha_1 z_1 + \cdots + \alpha_n z_n$, and so $y^\top y = \alpha_1 z_1^\top y + \cdots + \alpha_n z_n^\top y = 0$, yielding $y = 0$, a contradiction. Hence, if the $z_i$'s span $\mathbb{R}^k$, then $Q$ is positive definite. This condition is equivalent to $\operatorname{rank}[z_1 \; \ldots \; z_n] = k$.
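The argument above can be checked numerically. A minimal sketch with my own toy sizes ($n = 10$, $k = 3$):

```python
import numpy as np

# Toy data: n = 10 observations in k = 3 dimensions.
rng = np.random.default_rng(1)
n, k = 10, 3
X = rng.normal(size=(n, k))
Z = X - X.mean(axis=0)               # centered vectors z_i
Q = (Z.T @ Z) / n

# (*) is nonnegative for an arbitrary y.
y = rng.normal(size=k)
assert y @ Q @ y >= 0

# The z_i span R^k (rank condition), so Q is positive definite:
assert np.linalg.matrix_rank(Z) == k
assert np.all(np.linalg.eigvalsh(Q) > 0)   # all eigenvalues strictly positive
```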
A correct covariance matrix is always symmetric and positive *semi*definite.
The covariance between two variables is defined as $\sigma(x, y) = E[(x - E(x))(y - E(y))]$.
This equation doesn't change if you switch the positions of $x$ and $y$. Hence the matrix has to be symmetric.
It also has to be positive *semi-*definite because:
You can always find a transformation of your variables such that the covariance matrix becomes diagonal. On the diagonal you find the variances of your transformed variables, which are either zero or positive; it is easy to see that this makes the transformed matrix positive semidefinite. And since definiteness is invariant under such a change of coordinates, it follows that the covariance matrix is positive semidefinite in any chosen coordinate system.
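The diagonalization step can be sketched numerically. This is my own illustration (the data are arbitrary correlated variables, not from the question):

```python
import numpy as np

# Arbitrary correlated data: 500 observations of 4 variables.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
Q = np.cov(X, rowvar=False)

# Q is symmetric, so eigh gives an orthogonal W that diagonalizes it.
eigvals, W = np.linalg.eigh(Q)
D = W.T @ Q @ W                          # covariance in the new coordinates

assert np.allclose(D, np.diag(eigvals), atol=1e-8)   # diagonal
assert np.all(eigvals >= -1e-10)                     # variances are >= 0
```

The diagonal entries of `D` are exactly the variances of the transformed variables, which is why they cannot be negative.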
When you estimate your covariance matrix (that is, when you calculate your sample covariance) with the formula stated above, it will obviously still be symmetric. It also has to be positive semidefinite (I think), because for each sample, the pdf that gives each sample point equal probability has the sample covariance as its covariance (somebody please verify this), so everything stated above still applies.
Variance-covariance matrices are always symmetric, as can be proven from the equation used to calculate each term of the matrix.
Also, Variance-Covariance matrices are always square matrices of size n, where n is the number of variables in your experiment.
Eigenvectors of symmetric matrices are always orthogonal.
With PCA, you determine the eigenvalues of the matrix to see if you could reduce the number of variables used in your experiment.
I would add to the nice argument of Zen the following, which explains why we often say that the covariance matrix is positive definite if $n - 1 \ge k$.
If $x_1, x_2, \ldots, x_n$ are a random sample from a continuous probability distribution, then $x_1, x_2, \ldots, x_n$ are almost surely (in the probability-theory sense) linearly independent.
Now, $z_1, z_2, \ldots, z_n$ are not linearly independent, because $\sum_{i=1}^n z_i = 0$; but since $x_1, x_2, \ldots, x_n$ are a.s. linearly independent, $z_1, z_2, \ldots, z_n$ a.s. span a subspace of dimension $n - 1$. If $n - 1 \ge k$, they therefore span all of $\mathbb{R}^k$.
To conclude: if $x_1, x_2, \ldots, x_n$ are a random sample from a continuous probability distribution and $n - 1 \ge k$, the covariance matrix is positive definite.
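This threshold is easy to see numerically. A sketch with my own toy dimension $k = 5$, comparing $n - 1 \ge k$ against $n - 1 < k$:

```python
import numpy as np

rng = np.random.default_rng(4)
k = 5

# n = k + 1 continuous-data samples, so n - 1 = k: Q is a.s. positive definite.
Q_big = np.cov(rng.normal(size=(k + 1, k)), rowvar=False)
assert np.all(np.linalg.eigvalsh(Q_big) > 0)

# n = k - 1 samples, so n - 1 < k: the centered vectors cannot span R^k,
# and Q is necessarily singular (some eigenvalues are zero).
Q_small = np.cov(rng.normal(size=(k - 1, k)), rowvar=False)
assert np.min(np.linalg.eigvalsh(Q_small)) < 1e-10
```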
For those with a non-mathematical background like me who don't quickly grasp the abstract mathematical formulae, here is a worked-out example in Excel for the most upvoted answer. The covariance matrix can also be derived in other ways.