F-test
An F-test is any statistical test in which the test statistic has an F-distribution if the null hypothesis is true. The name was coined by George W. Snedecor in honour of Sir Ronald A. Fisher, who initially developed the statistic as the variance ratio in the 1920s.[1] Examples include:
- The hypothesis that the means of multiple normally distributed populations, all having the same standard deviation, are equal. This is perhaps the best-known hypothesis tested by means of an F-test, and the simplest problem in the analysis of variance (ANOVA).
- The hypothesis that a proposed regression model fits well. See Lack-of-fit sum of squares.
- The hypothesis that the standard deviations of two normally distributed populations are equal, and thus that they are of comparable origin.
Note that if it is equality of variances (or standard deviations) that is being tested, the F-test is extremely non-robust to non-normality: even modest departures from the normal distribution make the test unreliable, and it should not be used in that case.
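For illustration, here is a minimal Python sketch of the two-sample variance-ratio test (SciPy has no single built-in function for this test, so the statistic is formed directly; the two samples are invented for the example, and the two-sided p-value is taken as twice the smaller tail probability, a common convention):

```python
# Two-sample F-test for equality of variances: the statistic is the ratio
# of the unbiased sample variances. The data below are invented.
import numpy as np
from scipy import stats

x = np.array([21.0, 24.1, 19.8, 22.5, 23.3, 20.7])
y = np.array([18.9, 25.2, 17.4, 26.1, 21.0, 24.8])

f_stat = np.var(x, ddof=1) / np.var(y, ddof=1)
dfx, dfy = len(x) - 1, len(y) - 1

# Two-sided p-value: twice the smaller tail probability.
p_value = 2 * min(stats.f.cdf(f_stat, dfx, dfy),
                  stats.f.sf(f_stat, dfx, dfy))
print(f_stat, p_value)
```

As noted above, this particular use of the F-test should be preceded by a check of normality, since the result is unreliable under even modest non-normality.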
Formula and calculation
The value of the test statistic used in an F-test is the ratio of two estimates of quantities that are equal under the null hypothesis. If the null hypothesis were true and the quantities could be measured exactly rather than estimated, this ratio would equal 1; because estimates are used, F falls sometimes above and sometimes below 1, and if the null hypothesis is false the ratio tends to differ substantially from 1. In the usual applications, the statistical modelling assumptions are based on a normal distribution for the random errors, and the two estimates in the ratio are statistically independent even though they are typically derived from the same data set.
In the case of multiple-comparison ANOVA problems, the F-test is used to test whether the variance measuring the differences between groups in a certain pre-defined grouping of observations is large compared to the variance measuring the differences within the groups: a large value suggests that the grouping is valid in some sense, that is, that there are real differences between the groups. The formula for an F-test is:

$$F = \frac{\text{explained variance}}{\text{unexplained variance}}$$

or:

$$F = \frac{\text{between-group variability}}{\text{within-group variability}} = \frac{\sum_{i=1}^{K} n_i (\bar{Y}_{i\cdot} - \bar{Y})^2 / (K - 1)}{\sum_{i=1}^{K} \sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_{i\cdot})^2 / (N - K)}$$

where $Y_{ij}$ is the j-th observation in the i-th group, $\bar{Y}_{i\cdot}$ is the sample mean of the i-th group, $n_i$ is the number of observations in the i-th group, $\bar{Y}$ is the overall mean, K is the number of groups, and N is the overall sample size. The quantities in the numerator and denominator of this ratio are each unbiased estimates of the within-group variance on the assumption that the between-group variance is zero. Note that when there are only two groups for the F-test,

$$F = t^2$$

where t is the Student's t statistic.
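As a numerical illustration of this identity (not part of the original article), the following Python sketch runs a pooled two-sample t-test and a two-group one-way ANOVA on the same data, here the a1 and a2 columns of the example below, and confirms that t² = F:

```python
# Check of the identity F = t^2 for two groups, using SciPy.
# The data are the a1 and a2 columns of the one-way ANOVA example below.
import numpy as np
from scipy import stats

g1 = np.array([6, 8, 4, 5, 3, 4])
g2 = np.array([8, 12, 9, 11, 6, 8])

t_stat, _ = stats.ttest_ind(g1, g2)   # pooled-variance (equal_var=True) t-test
f_stat, _ = stats.f_oneway(g1, g2)    # one-way ANOVA with two groups

print(t_stat**2, f_stat)  # the two numbers agree up to floating-point error
```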
In the case of regression: consider two models, 1 and 2, where model 1 is nested within model 2. That is, model 1 has p1 parameters and model 2 has p2 parameters, where p2 > p1. (Any constant parameter in the model is included when counting the parameters; for instance, the simple linear model y = mx + b has p = 2 under this convention.) If there are n data points from which to estimate the parameters of both models, then F is[2]

$$F = \frac{(\mathrm{RSS}_1 - \mathrm{RSS}_2)/(p_2 - p_1)}{\mathrm{RSS}_2/(n - p_2)}$$

where RSSi is the residual sum of squares of model i. If the regression model has been fitted with weights, then replace RSSi with χ², the weighted sum of squared residuals. Under the null hypothesis, F has an F-distribution with (p2 − p1, n − p2) degrees of freedom; the probability that the decrease in RSS associated with the additional p2 − p1 parameters is due to chance alone is given by the tail probability of that F-distribution. The null hypothesis, that none of the additional p2 − p1 parameters differs from zero, is rejected if the calculated F exceeds the critical value of F for some desired rejection probability (e.g. 0.05).
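A minimal Python sketch of this nested-model comparison (the data points are invented; model 1 is a constant-only fit with p1 = 1, model 2 a straight line with p2 = 2):

```python
# Nested-model F-test: does a straight line fit significantly better
# than a constant? Invented illustration data.
import numpy as np
from scipy import stats

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.0, 2.8, 4.1, 4.9, 6.2])
n = len(y)

# Model 1: intercept only (p1 = 1).
rss1 = np.sum((y - y.mean()) ** 2)
p1 = 1

# Model 2: least-squares line y = m*x + b (p2 = 2).
m, b = np.polyfit(x, y, 1)
rss2 = np.sum((y - (m * x + b)) ** 2)
p2 = 2

f = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
p_value = stats.f.sf(f, p2 - p1, n - p2)  # upper-tail probability
print(f, p_value)
```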
Table of F-test critical values
A table of F-test critical values is included in most statistical texts.
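Such critical values can also be computed directly rather than read from a table, for example with SciPy's F-distribution quantile function (a minimal sketch; the degrees of freedom match the example below):

```python
# Critical value of the F-distribution: upper 5% point with (2, 15)
# degrees of freedom, as used in the one-way ANOVA example below.
from scipy import stats

f_crit = stats.f.ppf(0.95, dfn=2, dfd=15)
print(round(f_crit, 2))  # 3.68
```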
One-way ANOVA example
| a1 | a2 | a3 |
|----|----|----|
| 6  | 8  | 13 |
| 8  | 12 | 9  |
| 4  | 9  | 11 |
| 5  | 11 | 8  |
| 3  | 6  | 7  |
| 4  | 8  | 12 |
a1, a2, and a3 are the three levels of the factor being studied. To calculate the F-ratio:
Step 1: calculate the $A_i$ values, where i refers to the number of the condition and $A_i$ is the sum of the scores in that condition. So:

$$A_1 = 6 + 8 + 4 + 5 + 3 + 4 = 30$$
$$A_2 = 8 + 12 + 9 + 11 + 6 + 8 = 54$$
$$A_3 = 13 + 9 + 11 + 8 + 7 + 12 = 60$$
Step 2: calculate $\bar{Y}_{A_i}$, the average of the values in condition $a_i$:

$$\bar{Y}_{A_1} = 30/6 = 5, \quad \bar{Y}_{A_2} = 54/6 = 9, \quad \bar{Y}_{A_3} = 60/6 = 10$$
Step 3: calculate these values:
- Total: $T = \sum A_i = 30 + 54 + 60 = 144$
- Average overall score: $\bar{Y}_T = \frac{T}{an} = \frac{144}{18} = 8$, where a = the number of conditions and n = the number of participants in each condition.
- $[T] = \frac{T^2}{an} = \frac{144^2}{18} = 1152$
- $[A] = \frac{\sum A_i^2}{n} = \frac{30^2 + 54^2 + 60^2}{6} = 1236$
- $[Y] = \sum Y^2 = 1304$. This is every score in every condition squared and then summed.
Step 4: calculate the sums of squares:
- $SS_A = [A] - [T] = 1236 - 1152 = 84$
- $SS_{S/A} = [Y] - [A] = 1304 - 1236 = 68$
Step 5: the degrees of freedom are now calculated:
- $df_A = a - 1 = 3 - 1 = 2$
- $df_{S/A} = a(n - 1) = 3(6 - 1) = 15$
Step 6: the mean square terms are calculated:
- $MS_A = \frac{SS_A}{df_A} = \frac{84}{2} = 42$
- $MS_{S/A} = \frac{SS_{S/A}}{df_{S/A}} = \frac{68}{15} \approx 4.53$
Step 7: finally, the F-ratio can be calculated:

$$F = \frac{MS_A}{MS_{S/A}} = \frac{42}{4.53} \approx 9.27$$
Step 8: look up the $F_{crit}$ value for the problem:
$F_{crit}(2, 15) = 3.68$ at α = 0.05. Since F = 9.27 ≥ 3.68, the results are significant, and one can reject the null hypothesis.
Note: the F(x, y) notation means that there are x degrees of freedom in the numerator and y degrees of freedom in the denominator.
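As a check on the arithmetic above, a short Python sketch (not part of the original example) reproduces the F-ratio with SciPy's one-way ANOVA routine:

```python
# Reproduce the worked one-way ANOVA example with SciPy.
from scipy import stats

a1 = [6, 8, 4, 5, 3, 4]
a2 = [8, 12, 9, 11, 6, 8]
a3 = [13, 9, 11, 8, 7, 12]

f_stat, p_value = stats.f_oneway(a1, a2, a3)
print(f_stat, p_value)  # F ≈ 9.26, p ≈ 0.002, matching the hand calculation
```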
Footnotes
- ^ Lomax, Richard G. (2007). Statistical Concepts: A Second Course, p. 10. ISBN 0-8058-5850-4.
- ^ GraphPad Software Inc. (2007-10-11). "How the F test works to compare models". http://www.graphpad.com/help/Prism5/prism5help.html?howtheftestworks.htm.