BACKGROUND: Safe to skip - it is here for reference, and to legitimize the question.
The opening of this paper reads:
"Karl Pearson's famous chi-square contingency test is derived from another statistic, called the z statistic, based on the Normal distribution. The simplest versions of $\chi^2$ can be shown to be mathematically identical to the equivalent z test. These tests produce the same results in all circumstances. For all intents and purposes "chi-squared" can be called "z-squared". The critical values of $\chi^2$ for one degree of freedom are the square of the corresponding critical values of z."
This has been stated several times on CV (here, here, here and elsewhere).
And indeed we can prove that $\chi^2_1$ is equivalent to $Z^2$ with $Z \sim N(0,1)$:

Let us say that $Z \sim N(0,1)$ and that $Q = Z^2$, and find the density of $Q$ by using the cdf method:

$$P(Q \le q) = P(Z^2 \le q) = P(-\sqrt{q} \le Z \le \sqrt{q}).$$

The problem is that we cannot integrate the density of the normal distribution in closed form. But we can express it:

$$F_Q(q) = F_Z(\sqrt{q}) - F_Z(-\sqrt{q}).$$

Taking the derivative:

$$f_Q(q) = f_Z(\sqrt{q})\,\frac{1}{2\sqrt{q}} + f_Z(-\sqrt{q})\,\frac{1}{2\sqrt{q}}.$$

Since the values of the normal pdf are symmetrical:

$$f_Q(q) = f_Z(\sqrt{q})\,\frac{1}{\sqrt{q}}.$$

Equating this to the pdf of the normal (now the $x$ in the pdf will be $\sqrt{q}$, to be plugged into the $e^{-x^2/2}$ part of the normal pdf), and remembering to include the $\frac{1}{\sqrt{q}}$ at the end:

$$f_Q(q) = f_Z(\sqrt{q})\,\frac{1}{\sqrt{q}} = \frac{1}{\sqrt{2\pi}}\,e^{-q/2}\,q^{-1/2}.$$

Compare to the pdf of the chi square:

$$f_Q(q) = \frac{1}{2^{\nu/2}\,\Gamma\!\left(\frac{\nu}{2}\right)}\,q^{\nu/2-1}\,e^{-q/2}.$$

Since $\Gamma(1/2) = \sqrt{\pi}$, for $\nu = 1$ df we have derived exactly the pdf of the chi square.
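The derivation above can be checked numerically. A minimal sketch in Python (standard library only), comparing the derived density of $Q = Z^2$ with the $\chi^2_1$ pdf at a few points:

```python
# Numerical check that the density of Q = Z^2 derived above matches
# the chi-square pdf with 1 degree of freedom.
import math

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def q_pdf(q):
    # f_Q(q) = f_Z(sqrt(q)) / sqrt(q), from the derivative of the cdf
    return normal_pdf(math.sqrt(q)) / math.sqrt(q)

def chi2_1_pdf(q):
    # 1 / (2^(1/2) * Gamma(1/2)) * q^(-1/2) * e^(-q/2), using Gamma(1/2) = sqrt(pi)
    return q ** (-0.5) * math.exp(-q / 2) / (math.sqrt(2) * math.sqrt(math.pi))

for q in (0.1, 1.0, 2.5, 5.0):
    assert abs(q_pdf(q) - chi2_1_pdf(q)) < 1e-12
print("densities agree")
```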
Further, if we call the function prop.test() in R we are invoking the same test as if we decide upon chisq.test().
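A rough Python analogue of this R pairing can be sketched with scipy (assumed available): the uncorrected chi-square statistic of a 2x2 table equals the squared z-statistic of the two-proportion test. The counts below are made up for illustration:

```python
# Sketch of why chisq.test(..., correct = FALSE) and prop.test(..., correct = FALSE)
# agree in R: the uncorrected 2x2 chi-square equals the squared z-statistic
# of the two-proportion test. Counts are made up.
import math
from scipy.stats import chi2_contingency

x1, n1 = 34, 100  # "successes" / trials in group 1
x2, n2 = 25, 100  # "successes" / trials in group 2

# z-test of two proportions with the pooled estimate p-hat
p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

# chi-square test on the same data as a 2x2 table, no Yates correction
table = [[x1, n1 - x1], [x2, n2 - x2]]
chi2, pval, df, expected = chi2_contingency(table, correction=False)

print("z squared equals chi-square:", abs(z ** 2 - chi2) < 1e-9)
```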
THE QUESTION:
So I get all these points, yet I still don't know how they apply to the actual implementation of these two tests for two reasons:
A z-test is not squared.
The actual test statistics are completely different:
The value of the test statistic for $\chi^2$ is:

$$\chi^2 = \sum_{i=1}^{n}\frac{(O_i - E_i)^2}{E_i} = N\sum_{i=1}^{n}\frac{(O_i/N - p_i)^2}{p_i}$$

where

$\chi^2$ = Pearson's cumulative test statistic, which asymptotically approaches a $\chi^2$ distribution; $O_i$ = the number of observations of type $i$; $N$ = total number of observations; $E_i = N p_i$ = the expected (theoretical) frequency of type $i$, asserted by the null hypothesis that the fraction of type $i$ in the population is $p_i$; $n$ = the number of cells in the table.
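A toy computation of this statistic, with made-up counts and null proportions:

```python
# Toy goodness-of-fit chi-square: sum of (O_i - E_i)^2 / E_i
# (counts and null proportions are made up).
obs = [18, 55, 27]           # O_i, observed counts
p = [0.2, 0.5, 0.3]          # p_i, null-hypothesis proportions
N = sum(obs)                 # total number of observations
exp = [N * pi for pi in p]   # E_i = N * p_i, expected counts
chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
print(round(chi2, 4))        # → 1.0
```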
On the other hand, the test statistic for a $z$-test is:

$$Z = \frac{\hat p_1 - \hat p_2}{\sqrt{\hat p\,(1-\hat p)\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}, \quad \text{with } \hat p = \frac{x_1 + x_2}{n_1 + n_2},$$

where $x_1$ and $x_2$ are the numbers of "successes", over the number of subjects in each one of the levels of the categorical variables, i.e. $\hat p_1 = x_1/n_1$ and $\hat p_2 = x_2/n_2$.
This formula seems to rely on the binomial distribution.
These two test statistics are clearly different, and result in different values for the actual test statistics, as well as for the p-values: 5.8481 for the $\chi^2$ and 2.4183 for the $z$-test, where $2.4183^2 = 5.8481$ (thank you, @mark999). The p-value for the $\chi^2$ test is 0.01559, while for the z-test it is 0.0077. The difference is explained by two-tailed versus one-tailed: $0.01559 / 2 = 0.0077$ (thank you, @amoeba).
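The one- versus two-tailed relation can be sketched with the z value quoted above (standard library only):

```python
# Sketch of the one- vs two-tailed relation: the chi-square p-value
# (df = 1) is the two-sided z p-value, i.e. twice the one-sided one.
import math

def normal_sf(z):
    # survival function of N(0, 1) via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

z = 2.4183                 # the z-statistic quoted in the question
one_sided = normal_sf(z)   # roughly 0.0078
two_sided = 2 * one_sided  # roughly 0.0156, the chi-square p for z^2 = 5.8481
print(round(one_sided, 4), round(two_sided, 4))
```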
So at what level do we say that they are one and the same?
(Comment) In chisq.test(), have you tried using correct=FALSE?

THE ANSWER:
Let us have a 2x2 frequency table where columns are two groups of respondents and rows are the two responses "Yes" and "No". And we've turned the frequencies into the proportions within group, i.e. into the vertical profiles:
The usual (not Yates-corrected) $\chi^2$ of this table, after you substitute proportions instead of frequencies in its formula, looks like this:

$$\chi^2 = \frac{n_1(p_1 - p)^2}{p} + \frac{n_1(q_1 - q)^2}{q} + \frac{n_2(p_2 - p)^2}{p} + \frac{n_2(q_2 - q)^2}{q}.$$
Remember that $p = \frac{n_1 p_1 + n_2 p_2}{n_1 + n_2}$, the element of the weighted average profile of the two profiles $(p_1, q_1)$ and $(p_2, q_2)$, and plug it in the formula, to obtain

$$\chi^2 = \frac{(p_1 - p_2)^2\,(n_1^2 n_2 + n_1 n_2^2)}{pq\,(n_1 + n_2)^2}.$$

Divide both numerator and denominator by the $(n_1^2 n_2 + n_1 n_2^2)$ and get

$$\chi^2 = \frac{(p_1 - p_2)^2}{pq\left(\frac{1}{n_1} + \frac{1}{n_2}\right)} = Z^2,$$

the squared z-statistic of the z-test of proportions for the "Yes" response.
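This algebraic identity can be spot-checked numerically; a sketch over a few arbitrary 2x2 tables, each given as (yes₁, no₁, yes₂, no₂):

```python
# Numerical spot-check of the algebra above: the uncorrected 2x2
# chi-square equals the squared z of two proportions, for any counts.

def chi2_2x2(a, b, c, d):
    # Pearson chi-square of the table [[a, b], [c, d]], no Yates correction
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def z_sq(a, b, c, d):
    # squared z of proportions; a and c are the "Yes" counts of the two groups
    n1, n2 = a + b, c + d
    p1, p2 = a / n1, c / n2
    p = (a + c) / (n1 + n2)
    return (p1 - p2) ** 2 / (p * (1 - p) * (1 / n1 + 1 / n2))

for table in [(12, 8, 5, 15), (30, 70, 45, 55), (3, 9, 11, 2)]:
    assert abs(chi2_2x2(*table) - z_sq(*table)) < 1e-10
print("chi-square == z^2 on all tables")
```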
Thus, the 2x2 homogeneity chi-square statistic (and test) is equivalent to the z-test of two proportions. The so-called expected frequencies computed in the chi-square test in a given column is the weighted (by the group $n$) average vertical profile (i.e. the profile of the "average group") multiplied by that group's $n$. Thus, it comes out that chi-square tests the deviation of each of the two groups' profiles from this average group profile, which is equivalent to testing the groups' profiles difference from each other, which is the z-test of proportions.

This is one demonstration of a link between a variables association measure (chi-square) and a group difference measure (z-test statistic). Attribute associations and group differences are (often) the two facets of the same thing.
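A small sketch of this reading of the expected frequencies (counts are made up): the expected "Yes" count in each group is the average-profile element $p$ times that group's $n$, which coincides with the usual row total × column total / grand total rule:

```python
# Sketch of the "expected frequency" reading above (made-up counts):
# the expected "Yes" count in group j is p * n_j, where p is the weighted
# average "Yes" proportion, i.e. the average vertical profile element.
x1, n1 = 34, 100
x2, n2 = 25, 100
p1, p2 = x1 / n1, x2 / n2
p = (n1 * p1 + n2 * p2) / (n1 + n2)  # average profile, "Yes" element

exp_yes_1 = p * n1  # expected "Yes" count in group 1
exp_yes_2 = p * n2  # expected "Yes" count in group 2

# the same numbers the chi-square test computes as
# row_total * column_total / grand_total
assert abs(exp_yes_1 - (x1 + x2) * n1 / (n1 + n2)) < 1e-9
assert abs(exp_yes_2 - (x1 + x2) * n2 / (n1 + n2)) < 1e-9
print(round(exp_yes_1, 4), round(exp_yes_2, 4))
```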
(Showing the expansion in the first line above, by @Antoni's request):