A chi-squared test (symbolically represented as χ²) is a statistical hypothesis test performed on categorical data, usually comparing observed counts against the counts expected under some hypothesis. The test was introduced by Karl Pearson in 1900 for the analysis of categorical data and distributions, which is why it is also known as Pearson's chi-squared test.
The chi-square test is used to estimate how likely the observed counts would be if the null hypothesis were true.
A hypothesis is a statement that a given condition might be true, which we can then test. Chi-squared statistics are typically built from a sum of squared differences between observed and expected counts, each divided by the expected count.
When the null hypothesis is true, the sampling distribution of the test statistic is called the chi-squared distribution. The chi-squared test helps to determine whether there is a notable difference between the expected frequencies and the observed frequencies in one or more classes or categories. It is commonly used to test the independence of categorical variables.
Here, P stands for the p-value, which the chi-square test is used to calculate in statistics. The different values of p lead to different interpretations of the hypothesis, as given below:
- P ≤ 0.05: the null hypothesis is rejected
- P > 0.05: the null hypothesis is not rejected (loosely, "accepted")
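The decision rule above can be sketched in a few lines of Python. The function name `reject_null` is illustrative, not from any library, and 0.05 is just the conventional significance level.

```python
# Minimal sketch of the p-value decision rule: reject the null
# hypothesis when the p-value is at or below the chosen alpha level.

def reject_null(p_value, alpha=0.05):
    """Return True when the null hypothesis is rejected at level alpha."""
    return p_value <= alpha

print(reject_null(0.03))  # p <= 0.05, so the null hypothesis is rejected
print(reject_null(0.20))  # p > 0.05, so it is not rejected
```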
Probability is about chance, risk and uncertainty: the likelihood of an outcome of a sample or of an event occurring. Statistics, on the other hand, is about how we handle data using different techniques. It helps represent complicated or bulk data in an easy, understandable way, and covers the collection, analysis, interpretation, presentation and organisation of data. Both probability and statistics underlie the chi-squared test.
The following are the important properties of the chi-square test:
- The variance of the distribution is equal to two times the number of degrees of freedom.
- The mean of the distribution is equal to the number of degrees of freedom.
- The chi-square distribution curve approaches the normal distribution when the degree of freedom increases.
The chi-squared test is done to check whether there is any difference between the observed values and the expected values. The formula for chi-square can be written as:
χ² = ∑(Oᵢ – Eᵢ)²/Eᵢ
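The formula above can be sketched directly in Python. The helper name `chi_square_stat` is illustrative, not a library function; it simply sums (Oᵢ – Eᵢ)²/Eᵢ over the categories.

```python
# Sketch of the chi-squared statistic: chi^2 = sum((O_i - E_i)^2 / E_i).

def chi_square_stat(observed, expected):
    """Sum the squared deviation over the expected count for each category."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Single-cell example matching the neighbourhood calculation later in
# the article: observed 90, expected about 80.54.
print(round(chi_square_stat([90], [80.54]), 2))  # about 1.11
```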
Chi-Square Test of Independence
The chi-square test of independence, also known as the chi-square test of association, is used to determine whether two categorical variables are associated. It is a non-parametric test and is mostly used to test statistical independence.
The chi-square test of independence is not appropriate when the categorical variables represent the pre-test and post-test observations. For this test, the data must meet the following requirements:
- Two categorical variables
- Relatively large sample size
- Categories of variables (two or more)
- Independence of observations
Example of Categorical Data
Let us take an example of categorical data: a society of 1,000 residents with four neighbourhoods, P, Q, R and S. A random sample of 650 residents of the society is taken, whose occupations are doctors, engineers and teachers. The null hypothesis is that each person's neighbourhood of residence is independent of the person's profession. The data are categorised as:
We use the 150 sampled residents living in neighbourhood P to estimate what proportion of the whole 1,000 people live in neighbourhood P. In the same way, we take 349/650 to estimate what proportion of the 1,000 are doctors. Under the independence assumption of the null hypothesis, the expected number of doctors in neighbourhood P is:
150 × 349/650 ≈ 80.54
If the observed number of doctors in neighbourhood P is 90, the chi-square test formula gives, for that particular cell of the table:
(Observed – Expected)²/Expected = (90 – 80.54)²/80.54 ≈ 1.11
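The expected-count calculation for a cell of a two-way table can be sketched as below. The name `expected_count` is illustrative; the numbers reuse the neighbourhood example (150 residents in P, 349 doctors, 650 sampled, 90 doctors observed in P).

```python
# Expected count for one cell under independence:
# expected = row_total * column_total / grand_total.

def expected_count(row_total, col_total, grand_total):
    return row_total * col_total / grand_total

e = expected_count(150, 349, 650)
print(round(e, 2))            # about 80.54

chi_cell = (90 - e) ** 2 / e  # cell contribution: (O - E)^2 / E
print(round(chi_cell, 2))     # about 1.11
```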
Some of the exciting facts about the Chi-square test are given below:
The chi-square statistic can only be used on counts. We cannot use it for data expressed as percentages, proportions, means or similar derived quantities. For example, if we have 20% of 400 people, we need to convert it to a count, i.e. 80, before running the test statistic.
A chi-square test will give us a p-value. The p-value will tell us whether our test results are significant or not.
However, to perform a chi-square test and get the p-value, we require two pieces of information:
(1) Degrees of freedom. That’s just the number of categories minus 1.
(2) The alpha level (α). You, the researcher, choose this. The usual alpha level is 0.05 (5%), but you could also use other levels such as 0.01 or 0.10.
In elementary statistics, questions usually come with the degrees of freedom (DF) and the alpha level, so we don't usually have to work them out. To get the degrees of freedom, count the categories and subtract 1.
The chi-square distribution table with three probability levels is provided here. The statistic is used to examine whether the distributions of certain variables differ from one another. Categorical variables produce data in categories, while numerical variables produce data in numerical form.
The distribution of χ2 with (r-1)(c-1) degrees of freedom(DF), is represented in the table given below. Here, r represents the number of rows in the two-way table and c represents the number of columns.
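The degrees-of-freedom rule for a two-way table can be sketched in one line. The name `table_df` is illustrative; the example uses the 4 neighbourhoods × 3 occupations table from earlier.

```python
# Degrees of freedom for an r x c two-way table: df = (r - 1) * (c - 1).

def table_df(rows, cols):
    return (rows - 1) * (cols - 1)

print(table_df(4, 3))  # 4 neighbourhoods x 3 occupations -> 6
```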
A survey on cars was conducted in 2011 and determined that 60% of car owners have only one car, 28% have two cars, and 12% have three or more. Suppose you have decided to conduct your own survey and have collected the data below; determine whether your data support the results of the 2011 study.
Use a significance level of 0.05. Out of 129 car owners surveyed, 73 had one car, 38 had two cars, and the remaining 18 had three or more.
Let us state the null and alternative hypotheses.
H0: The proportion of car owners with one, two or three cars is 0.60, 0.28 and 0.12 respectively.
H1: The proportion of car owners with one, two or three cars does not match the proposed model.
A Chi-Square goodness of fit test is appropriate because we are examining the distribution of a single categorical variable.
Let’s tabulate the given information and calculate the required values.
| Category | Observed (Oᵢ) | Expected (Eᵢ) | Oᵢ – Eᵢ | (Oᵢ – Eᵢ)² | (Oᵢ – Eᵢ)²/Eᵢ |
|---|---|---|---|---|---|
| One car | 73 | 0.60 × 129 = 77.4 | −4.4 | 19.36 | 0.2501 |
| Two cars | 38 | 0.28 × 129 = 36.1 | 1.9 | 3.61 | 0.1 |
| Three or more cars | 18 | 0.12 × 129 = 15.5 | 2.5 | 6.25 | 0.4032 |
Therefore, χ² = ∑(Oᵢ – Eᵢ)²/Eᵢ = 0.7533
Let’s compare it to the chi-square value for the significance level 0.05.
The degrees of freedom = 3 – 1 = 2
Using the table, the critical value for a 0.05 significance level with df = 2 is 5.99.
That means that 95 times out of 100, a sample drawn from a population that follows the proposed model will have a χ² value of 5.99 or less.
The chi-square statistic is only 0.7533, which is less than 5.99, so we fail to reject (accept) the null hypothesis.
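The whole car-survey calculation can be reproduced with the standard library alone. This is a sketch: the critical value 5.99 (df = 2, α = 0.05) is taken from a chi-square table, and the variable names are illustrative.

```python
# Goodness-of-fit calculation for the car survey above.

observed = [73, 38, 18]            # one, two, three-or-more cars
proportions = [0.60, 0.28, 0.12]   # model from the 2011 survey
n = sum(observed)                  # 129 car owners

expected = [p * n for p in proportions]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# About 0.758 with unrounded expected counts; the hand calculation
# above, which rounds the expected counts, gives 0.7533.
print(round(chi2, 3))

critical = 5.99  # from the chi-square table, df = 2, alpha = 0.05
print(chi2 <= critical)  # True: fail to reject the null hypothesis
```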
Frequently Asked Questions – FAQs
What is the chi-square test? Write its formula.
χ² = ∑(Oᵢ – Eᵢ)²/Eᵢ
Oᵢ = observed value
Eᵢ = expected value
How do you calculate chi squared?
χ² = ∑(Oᵢ – Eᵢ)²/Eᵢ
This can be done as follows.
For each observed number in the data, subtract the corresponding expected value, i.e. (O – E).
Square the difference, (O – E)².
Divide these squares by the expected value of each observation, i.e. (O – E)²/E.
Finally, take the sum of these values.
Thus, the obtained value will be the chi-squared statistic.
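The four steps above can be spelled out one at a time in Python. This is an illustrative sketch reusing the car-survey numbers from the worked example; the variable names are not from any library.

```python
# The chi-squared statistic computed step by step.

observed = [73, 38, 18]
expected = [77.4, 36.1, 15.5]

diffs = [o - e for o, e in zip(observed, expected)]     # step 1: O - E
squares = [d ** 2 for d in diffs]                       # step 2: (O - E)^2
ratios = [s / e for s, e in zip(squares, expected)]     # step 3: (O - E)^2 / E
chi2 = sum(ratios)                                      # step 4: sum them up

print([round(r, 4) for r in ratios])
print(round(chi2, 4))
```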