Completing a 3x3 correlation matrix: two coefficients of the three given


I was asked this question in an interview.

Suppose we have a correlation matrix of the form

$$\begin{bmatrix} 1 & 0.6 & 0.8 \\ 0.6 & 1 & \gamma \\ 0.8 & \gamma & 1 \end{bmatrix}$$

I was asked to find the value of $\gamma$, given this correlation matrix.
I thought I could do something with the eigenvalues, since they all have to be greater than or equal to zero (the matrix is positive semidefinite), but I didn't think that approach would lead to the answer. I was missing a trick.

Can you give a hint for solving it?

novice
Comments are not for extended discussion; this conversation has been moved to chat.
whuber
A search of this site leads directly to one of (several) threads containing the relevant formula: stats.stackexchange.com/questions/5747 . There are also some useful plots in the answer by felix s .
whuber

Answers:


We already know that $\gamma$ is bounded between $[-1, 1]$. A correlation matrix must be positive semidefinite, and hence its principal minors must be nonnegative.

Thus,

$$1\,(1-\gamma^2) - 0.6\,(0.6 - 0.8\gamma) + 0.8\,(0.6\gamma - 0.8) \;\ge\; 0$$
$$\implies -\gamma^2 + 0.96\,\gamma \;\ge\; 0 \;\implies\; \gamma\,(\gamma - 0.96) \;\le\; 0$$
and together with $-1 \le \gamma \le 1$ this gives $0 \le \gamma \le 0.96$.
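A quick numerical check of this derivation (a NumPy sketch added here, not part of the original answer): the determinant works out to $-\gamma^2 + 0.96\gamma$, and the smallest eigenvalue is nonnegative exactly for $\gamma$ in $[0, 0.96]$:

    import numpy as np

    def corr(gamma):
        # The correlation matrix from the question, with the single free entry gamma.
        return np.array([[1.0, 0.6, 0.8],
                         [0.6, 1.0, gamma],
                         [0.8, gamma, 1.0]])

    # The determinant equals -gamma^2 + 0.96*gamma, so it is nonnegative exactly on [0, 0.96].
    for g in (-0.1, 0.0, 0.5, 0.96, 0.97):
        A = corr(g)
        print(g, round(np.linalg.det(A), 4), bool(np.all(np.linalg.eigvalsh(A) >= -1e-12)))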
rightskewed
You may want to read up on Sylvester's Criterion.
rightskewed
Nice answer. I would add the following: a popular way to obtain gamma is to look for the gamma that leads to the correlation matrix with the smallest possible nuclear norm (a.k.a. the Ky Fan norm) while satisfying the equations above. For more, look up "matrix completion" or "compressed sensing", or see this report on the topic: bit.ly/2iwY1nW .
Mustafa S Eisa
For this to be a proof, you need the result in the other direction as well: if all nontrivial minors are $> 0$ and the matrix has determinant $0$, then the matrix is positive semidefinite.
Federico Poloni

Here is a simpler (and perhaps more intuitive) solution:

Think of covariance as an inner product over an abstract vector space. Then the entries of the correlation matrix are $\cos\langle v_i, v_j\rangle$ for vectors $v_1, v_2, v_3$, where the angle bracket $\langle v_i, v_j\rangle$ denotes the angle between $v_i$ and $v_j$.

It is not hard to visualize that $\langle v_2, v_3\rangle$ is bounded by $|\langle v_1, v_2\rangle \pm \langle v_1, v_3\rangle|$. The bound on its cosine ($\gamma$) is therefore $\cos[\langle v_1, v_2\rangle \pm \langle v_1, v_3\rangle]$. Basic trigonometry then gives $\gamma \in [\,0.6\times 0.8 - 0.6\times 0.8,\ 0.6\times 0.8 + 0.6\times 0.8\,] = [0, 0.96]$.

Edit: Note that in the last line the terms are really $\cos\langle v_1, v_2\rangle \cos\langle v_1, v_3\rangle \mp \sin\langle v_1, v_3\rangle \sin\langle v_1, v_2\rangle$; the second appearance of 0.6 and 0.8 is a coincidence, thanks to $0.6^2 + 0.8^2 = 1$.
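To make the trigonometry concrete, here is a small sketch (my addition, assuming the angle interpretation above) that takes the cosine of the sum and difference of the two angles:

    import numpy as np

    theta12 = np.arccos(0.6)        # angle between v1 and v2
    theta13 = np.arccos(0.8)        # angle between v1 and v3
    lo = np.cos(theta12 + theta13)  # smallest attainable correlation between v2 and v3
    hi = np.cos(theta12 - theta13)  # largest attainable correlation
    print(lo, hi)                   # approximately 0.0 and 0.96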

yangle
+1, a legitimate geometric reasoning (that said, I didn't check your computations). This is exactly what I've proposed in comments to the question (unfortunately, all the comments were moved by a moderator to chat; see the link above).
ttnphns
It seems to me you have "proven" that all correlations must be non-negative, because it appears your calculation will always give zero for the lower limit. If that's not the case, then could you elaborate on how your computation works in general? I really don't trust--or perhaps don't understand--your bound, because in three or more dimensions you can always find a $v_1$ for which both $v_1\cdot v_2 = v_1\cdot v_3 = 0$, and then your bound implies $v_2\cdot v_3$ is always zero! (cc @ttnphns)
whuber
@whuber: Sorry about the confusion. The calculation does not always give zero for the lower limit. I've amended my answer.
yangle
How do you respond to my last concern? It seems to indicate your bounds are incorrect.
whuber
@whuber: In your case, ⟨v1,v2⟩=⟨v1,v3⟩=π/2, hence the bound |⟨v1,v2⟩±⟨v1,v3⟩| is [0, π] as expected. The bound cos⟨v1,v2⟩cos⟨v1,v3⟩∓sin⟨v1,v3⟩sin⟨v1,v2⟩ on γ also works out to be [-1, 1].
yangle

This is what I meant in my early comments to the question, and what I feel @yangle may be talking about (although I did not follow / check their computations).

Let $\cos\alpha = r_{xy} = 0.6$ and $\cos\beta = r_{yz} = 0.8$. What might be the boundaries for $\cos\gamma = r_{xz}$? That correlation can take on any value defined by $Z$ circumscribing about $Y$ (keeping the angle corresponding to $r_{yz} = 0.8$ with it):

[figure: Z circumscribing about Y at a fixed angle; the two extreme positions relative to X shown as blue and red vectors]

As it spins, two positions are remarkable as ultimate wrt X, both are when Z falls into the plane XY. One is between X and Y, and the other is on the opposite side of Y. These are shown by blue and red vectors. At both these positions exactly the configuration XYZ (correlation matrix) is singular. And these are the minimal and maximal angle (hence correlation) Z can attain wrt X.

Applying the trigonometric formula for the cosine of a sum or difference of angles in a plane, we have:

$$\cos\gamma = r_{xy}\,r_{yz} \mp \sqrt{(1 - r_{xy}^2)(1 - r_{yz}^2)} = [0,\ 0.96]$$ as the bounds.
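As a quick check of this formula (a sketch I am adding, not from the original answer):

    import numpy as np

    r_xy, r_yz = 0.6, 0.8
    half_width = np.sqrt((1 - r_xy**2) * (1 - r_yz**2))   # sin(alpha) * sin(beta)
    bounds = (r_xy * r_yz - half_width, r_xy * r_yz + half_width)
    print(bounds)   # approximately (0.0, 0.96)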

This geometric view is just another look (specific to, and simpler in, the 3D case) at what @rightskewed expressed in algebraic terms (minors etc.).

ttnphns
If X, Y, Z are random variables, how do you map them to vectors in 3d space (they can only be vectors in 1d space)? Also, if the RVs are N×1, then won't they be vectors in N-dimensional space?
novice
@novice Yes, they are initially 3 vectors in Nd space, but only 3 dimensions are nonredundant. Please follow the 2nd link in the answer and read further reference there to subject space where it is explained.
ttnphns

Playing around with principal minors may be fine on 3 by 3 or maybe 4 by 4 problems, but runs out of gas and numerical stability in higher dimensions.

For a single "free" parameter problem such as this, it's easy to see that the set of all values making the matrix psd will be a single interval. Therefore, it is sufficient to find the minimum and maximum such values. This can easily be accomplished by numerically solving a pair of linear SemiDefinite Programming (SDP) problems:

  1. minimize γ subject to matrix is psd.
  2. maximize γ subject to matrix is psd.

For example, these problems can be formulated and numerically solved using YALMIP under MATLAB.

  1. gamma = sdpvar; A = [1 .6 .8;.6 1 gamma;.8 gamma 1]; optimize(A >= 0, gamma)
  2. optimize(A >= 0,-gamma)

Fast, easy, and reliable.
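If an SDP solver is not at hand, a crude NumPy sketch (my addition; a brute-force scan rather than true semidefinite programming) also confirms that the feasible values of gamma form the interval [0, 0.96]:

    import numpy as np

    def min_eig(gamma):
        A = np.array([[1.0, 0.6, 0.8],
                      [0.6, 1.0, gamma],
                      [0.8, gamma, 1.0]])
        return np.linalg.eigvalsh(A)[0]   # smallest eigenvalue of the candidate matrix

    # Scan the single free parameter and keep the values for which the matrix is psd.
    grid = np.linspace(-1.0, 1.0, 20001)
    feasible = np.array([g for g in grid if min_eig(g) >= -1e-10])
    print(feasible.min(), feasible.max())   # approximately 0.0 and 0.96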

BTW, if the smarty pants interviewer asking the question doesn't know that SemiDefinite Programming, which is well-developed and has sophisticated and easy to use numerical optimizers for reliably solving practical problems, can be used to solve this problem, and many much more difficult variants, tell him/her that this is no longer 1870, and it's time to take advantage of modern computational developments.

Mark L. Stone

Let us consider the following convex set

$$\left\{ (x, y, z) \in \mathbb{R}^3 \;:\; \begin{bmatrix} 1 & x & y \\ x & 1 & z \\ y & z & 1 \end{bmatrix} \succeq \mathrm{O}_3 \right\}$$

which is a spectrahedron known as the 3-dimensional elliptope. Here is a depiction of this elliptope:

[figure: the 3-dimensional elliptope]

Intersecting this elliptope with the planes defined by x=0.6 and by y=0.8, we obtain a line segment whose endpoints are colored in yellow

[figure: the elliptope intersected with the planes x = 0.6 and y = 0.8; the endpoints of the resulting line segment are shown in yellow]

The boundary of the elliptope is a cubic surface defined by

$$\det \begin{bmatrix} 1 & x & y \\ x & 1 & z \\ y & z & 1 \end{bmatrix} = 1 + 2xyz - x^2 - y^2 - z^2 = 0$$

If x=0.6 and y=0.8, then the cubic equation above boils down to the quadratic equation

$$0.96\,z - z^2 = z\,(0.96 - z) = 0$$

Thus, the intersection of the elliptope with the two planes is the line segment parametrized by

$$\{\, (0.6,\ 0.8,\ t) \;:\; 0 \le t \le 0.96 \,\}$$
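A symbolic check of this reduction (a SymPy sketch added here, not part of the original answer):

    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    M = sp.Matrix([[1, x, y],
                   [x, 1, z],
                   [y, z, 1]])
    det = sp.expand(M.det())                     # 1 + 2*x*y*z - x**2 - y**2 - z**2
    on_planes = det.subs({x: sp.Rational(3, 5), y: sp.Rational(4, 5)})
    print(sp.expand(on_planes))                  # 24*z/25 - z**2, i.e. 0.96*z - z**2
    print(sp.solve(sp.Eq(on_planes, 0), z))      # [0, 24/25], i.e. the endpoints 0 and 0.96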
Rodrigo de Azevedo

Every positive semi-definite matrix is a correlation/covariance matrix (and vice versa).

To see this, start with a positive semi-definite matrix $A$ and take its eigendecomposition (which exists by the spectral theorem, since $A$ is symmetric), $A = UDU^T$, where $U$ is a matrix of orthonormal eigenvectors and $D$ is a diagonal matrix with the eigenvalues on the diagonal. Then let $B = UD^{1/2}U^T$, where $D^{1/2}$ is a diagonal matrix with the square roots of the eigenvalues on the diagonal.

Then take a vector $x$ with i.i.d. mean-zero, variance-1 entries, and note that $Bx$ also has mean zero and covariance (and correlation) matrix $A$.

Now, to see every correlation/covariance matrix is positive semi-definite is simple: let $R = E[xx^T]$ be a correlation matrix. Then $R = R^T$ is easy to see, and $a^T R a = E[(a^T x)^2] \ge 0$, so the Rayleigh quotient is non-negative for any non-zero $a$, so $R$ is positive semi-definite.

Now, noting that a symmetric matrix is positive semi-definite if and only if its eigenvalues are non-negative, we see that your original approach would work: calculate the characteristic polynomial and look at its roots to see whether they are non-negative. Note that testing for positive definiteness is easy with Sylvester's Criterion (as mentioned in a comment on another answer): a matrix is positive definite if and only if its leading principal minors all have positive determinant. There are extensions for the semidefinite case (all principal minors must have non-negative determinant), but then you have to check on the order of $2^n$ minors, versus just $n$ for positive definiteness.
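A small simulation sketch of this construction (my addition; the choice gamma = 0.5 is just an arbitrary admissible value inside [0, 0.96]):

    import numpy as np

    rng = np.random.default_rng(0)

    # gamma = 0.5 lies in [0, 0.96], so A below should be a valid correlation matrix.
    gamma = 0.5
    A = np.array([[1.0, 0.6, 0.8],
                  [0.6, 1.0, gamma],
                  [0.8, gamma, 1.0]])

    # Eigendecomposition A = U D U^T and the symmetric square root B = U D^{1/2} U^T.
    eigvals, U = np.linalg.eigh(A)
    assert np.all(eigvals >= -1e-12)            # A is psd, as the argument requires
    B = U @ np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ U.T

    # x has i.i.d. mean-0, variance-1 entries, so Bx has covariance (approximately) A.
    x = rng.standard_normal((3, 200_000))
    print(np.round(np.cov(B @ x), 2))           # close to A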

Batman
sumber