Tests of Significance

Why are tests of significance needed?

  • Tests of significance are statistical tools that help us make decisions about changes to responses (process outputs).
  • Without these tools, we might look at a change in a process output and think that it is important, when in fact the change is just part of the common cause variation of the process.
  • Tests of significance give us a statistical basis for determining if a change in factor levels leads to a statistically significant effect on the process response.
  • While tests of significance can be standalone statistical tools, they serve as the backbone of ANOVA (analysis of variance) and of the analysis of the results from designed experiments.

α and β Risks

  • Whenever we make statistics-based decisions, we have to accept some risk in our assessments.
  • There are two types of risks we face.
    • We can make a mistake in saying results are different when they are actually the same. This is an α (alpha) risk.
    • A β (beta) risk occurs when we say that results are the same when they are actually different.
  • With tests of significance, a 5% α risk is typically used.
    • We can place all of the risk in one tail when testing for a change in one direction, or we can split the risk between two tails when testing for any type of difference, as the sketch after this list illustrates.
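
A minimal sketch of where the 5% α risk goes in a one-tailed versus a two-tailed test, assuming Python with scipy is available; the sample size of 10 is only illustrative:

```python
# Minimal sketch: one-tailed vs. two-tailed critical values for a 5% alpha risk.
# Assumes Python with scipy installed; the sample size of 10 is illustrative only.
from scipy import stats

alpha = 0.05
df = 10 - 1  # degrees of freedom for a sample of 10 values

# One-tailed test: all 5% of the risk sits in one tail of the t distribution.
t_crit_one_tail = stats.t.ppf(1 - alpha, df)

# Two-tailed test: the 5% risk is split, 2.5% in each tail.
t_crit_two_tail = stats.t.ppf(1 - alpha / 2, df)

print(f"One-tailed critical t (df={df}): {t_crit_one_tail:.3f}")   # about 1.833
print(f"Two-tailed critical t (df={df}): {t_crit_two_tail:.3f}")   # about 2.262
```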

Degrees of Freedom

  • Besides the α risk, there is a second term that we need to use with tests of significance: the degrees of freedom.
  • The degrees of freedom, or df, are the number of independent values we have in a calculation. Typically, this is the number of values associated with the calculation minus 1, as the short example below shows.
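
A short worked example of the typical n − 1 rule, using made-up data and plain Python:

```python
# Minimal sketch: degrees of freedom for a sample standard deviation.
# The data values are made up for illustration.
data = [9.8, 10.1, 10.0, 9.9, 10.2]

n = len(data)
df = n - 1  # one degree of freedom is "used up" estimating the sample mean

mean = sum(data) / n
# The sample variance divides by df (n - 1), not by n.
sample_variance = sum((x - mean) ** 2 for x in data) / df

print(f"n = {n}, degrees of freedom = {df}")
print(f"sample variance = {sample_variance:.4f}")
```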

Hypothesis Testing

  • Hypothesis testing is an important concept needed for both tests of significance and design of experiments. A hypothesis is an assumption about the outcome of the test or experiment.
  • If a hypothesis is rejected, it means that the data available are sufficient to conclude that the hypothesis is false.
  • However, if the hypothesis is not rejected (accepted), we can only say that the data are insufficient to conclude that the hypothesis is false; it does not mean that the hypothesis has been proven true. The sketch below shows this decision rule.
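
A minimal sketch of the reject / fail-to-reject decision, assuming Python with scipy; the data values and the hypothesized mean of 10.0 are made up for illustration:

```python
# Minimal sketch of the reject / fail-to-reject decision rule.
# Assumes Python with scipy; data and the hypothesized mean are illustrative only.
from scipy import stats

data = [10.2, 9.9, 10.4, 10.1, 10.3, 10.0]
hypothesized_mean = 10.0   # the hypothesis: the process mean equals 10.0
alpha = 0.05               # the alpha risk we are willing to accept

t_stat, p_value = stats.ttest_1samp(data, hypothesized_mean)

if p_value < alpha:
    # Data are sufficient to conclude the hypothesis is false.
    print(f"Reject the hypothesis (p = {p_value:.3f} < {alpha})")
else:
    # Data are insufficient to conclude the hypothesis is false;
    # this does not prove the hypothesis true.
    print(f"Fail to reject the hypothesis (p = {p_value:.3f} >= {alpha})")
```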

Types of Tests of Significance

  • There are four major types of significance tests: the Z-test and t-test look at differences in mean values, while the chi-squared and F-tests look at differences in variances.
  • With experimental designs, we use the tests of significance based on sample statistics, the t-test and the F-test, rather than the tests that require known population parameters.

t-Tests

  • The t-test gives a statistical basis for deciding whether a sample comes from a given population (a one-sample test) or whether two samples indicate that their populations have equal means (a two-sample test). Both cases are sketched below.
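
A minimal sketch of both uses of the t-test, assuming Python with scipy; all data values are made up for illustration:

```python
# Minimal sketch of the two uses of the t-test described above.
# Assumes Python with scipy; all data values are made up for illustration.
from scipy import stats

alpha = 0.05

# One-sample t-test: is this sample consistent with a population mean of 50?
sample = [49.1, 50.3, 50.8, 49.7, 50.2, 51.0]
t_stat, p_value = stats.ttest_1samp(sample, 50.0)
print(f"One-sample: t = {t_stat:.3f}, p = {p_value:.3f}")

# Two-sample t-test: do these two samples indicate equal population means?
sample_a = [49.1, 50.3, 50.8, 49.7, 50.2, 51.0]
sample_b = [51.2, 52.0, 51.5, 52.3, 51.8, 52.1]
t_stat, p_value = stats.ttest_ind(sample_a, sample_b)
print(f"Two-sample: t = {t_stat:.3f}, p = {p_value:.3f}")

# In both cases, a p-value below alpha indicates a statistically significant difference.
```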

F-Tests

  • The F-test investigates whether two populations have equal variances, based on the variances of two samples drawn from those populations, as sketched below.
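
A minimal sketch of an F-test on two sample variances, assuming Python with scipy; the data are made up for illustration, and the variance ratio and its p-value are computed directly from scipy's F distribution:

```python
# Minimal sketch of an F-test comparing the variances of two samples.
# Assumes Python with scipy; data values are made up for illustration.
from scipy import stats
import statistics

alpha = 0.05
sample_a = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4, 9.7, 10.3]
sample_b = [10.0, 10.9, 9.2, 10.6, 9.4, 11.1, 9.0, 10.8]

var_a = statistics.variance(sample_a)   # sample variance (n - 1 in the denominator)
var_b = statistics.variance(sample_b)
df_a = len(sample_a) - 1
df_b = len(sample_b) - 1

# Put the larger variance in the numerator so that F >= 1.
if var_a >= var_b:
    f_stat, df_num, df_den = var_a / var_b, df_a, df_b
else:
    f_stat, df_num, df_den = var_b / var_a, df_b, df_a

# Two-tailed p-value for the hypothesis that the population variances are equal.
p_value = min(2 * stats.f.sf(f_stat, df_num, df_den), 1.0)

print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
print("Reject equal variances" if p_value < alpha else "Fail to reject equal variances")
```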