Today statistics provides the basis for inference in most medical research. Yet, for want of exposure to statistical theory and practice, it continues to be regarded as an Achilles' heel by everyone involved in the loop of research and publication: researchers (authors), reviewers, editors and readers.

Most of us are familiar to some degree with descriptive statistical measures such as those of central tendency and those of dispersion. However, we falter at inferential statistics. This need not be the case, particularly with the widespread availability of powerful, user-friendly statistical software. As outlined below, a few fundamental considerations will lead one to select the appropriate statistical test for hypothesis testing. It is important, however, that the appropriate statistical analysis is decided at the planning stage, before the study begins, and that an optimum sample size is chosen. These cannot be decided arbitrarily after the study is over and the data have already been collected.

The great majority of studies can be tackled with a basket of some 30 tests out of the more than 100 that are in use. The test to be used depends on the type of research question being asked. The other determining factors are the type of data being analyzed and the number of groups or data sets involved in the study. The following schemes, based on five generic research questions, should help.[1]

Question 1: Is there a difference between groups that are unpaired? Groups or data sets are regarded as unpaired if there is no possibility of the values in one data set being related to or influenced by the values in the other data sets. Different tests are required for quantitative (numerical) data and qualitative (categorical) data, as shown in Fig. 1. For numerical data, it is important to decide whether they follow the parameters of the normal distribution curve (Gaussian curve), in which case parametric tests are applied. If the distribution of the data is not normal, or if one is not sure about the distribution, it is safer to use non-parametric tests. When comparing more than two sets of numerical data, a multiple group comparison test such as one-way analysis of variance (ANOVA) or the Kruskal-Wallis test should be used first. Only if this returns a statistically significant p value (usually taken as p < 0.05) should it be followed by a post hoc test to determine between exactly which two data sets the difference lies. Repeatedly applying the t test, or its non-parametric counterpart the Mann-Whitney U test, to a multiple group situation increases the possibility of incorrectly rejecting the null hypothesis.
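
To make this workflow concrete, here is a minimal sketch in Python with SciPy (a tool this editorial does not itself use; the group names and values are hypothetical) comparing three unpaired groups:

# Unpaired multiple-group comparison: a minimal SciPy sketch
# (group_a/b/c are hypothetical illustrative data, not from this article).
from scipy import stats

group_a = [5.1, 4.9, 6.2, 5.8, 5.5]
group_b = [6.8, 7.1, 6.5, 7.4, 6.9]
group_c = [5.0, 5.3, 4.8, 5.6, 5.2]

# Parametric: one-way ANOVA across all groups at once.
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)

# Non-parametric alternative: Kruskal-Wallis test.
h_stat, p_kw = stats.kruskal(group_a, group_b, group_c)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")

# Only if the omnibus p value is significant (usually p < 0.05) would one
# proceed to post hoc pairwise comparisons, rather than repeated t tests.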

Question 2: Is there a difference between groups which are paired? Pairing signifies that data sets are derived by repeated measurements (e.g. before-after measurements or multiple measurements across time) on the same set of subjects. Pairing will also occur if subject groups are different but values in one group are in some way linked or related to values in the other group (e.g. twin studies, sibling studies, parent-offspring studies). A crossover study design also calls for the application of paired group tests for comparing the effects of different interventions on the same subjects. Sometimes subjects are deliberately paired to match baseline characteristics such as age, sex, severity or duration of disease. A scheme similar to Fig. 1 is followed in paired data set testing, as outlined in Fig. 2. Once again, multiple data set comparison should be done through appropriate multiple group tests followed by post hoc tests.
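
A similar sketch for the paired situation, again with SciPy and hypothetical before/after values measured on the same six subjects:

# Paired comparison: a minimal SciPy sketch (hypothetical data).
from scipy import stats

before = [140, 152, 138, 147, 155, 149]
after = [132, 145, 135, 140, 148, 143]

# Parametric: paired t test on the within-subject differences.
t_stat, p_paired = stats.ttest_rel(before, after)

# Non-parametric counterpart: Wilcoxon signed-rank test.
w_stat, p_wilcoxon = stats.wilcoxon(before, after)

print(f"Paired t test: t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_wilcoxon:.4f}")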

Question 3: Is there any association between variables? The various tests applicable are outlined in Fig. 3. It should be noted that the tests meant for numerical data are for testing the association between two variables. These are correlation tests, and they express the strength of the association as a correlation coefficient. An inverse correlation between two variables is depicted by a minus sign. All correlation coefficients vary in magnitude from 0 (no correlation at all) to 1 (perfect correlation). A correlation, even a perfect one, may suggest but does not in itself establish causality. When two numerical variables are linearly related to each other, a linear regression analysis can generate a mathematical equation, which can predict the dependent variable based on a given value of the independent variable.[2] Odds ratios and relative risks are the staple of epidemiologic studies and express the association between categorical data that can be summarized as a 2 × 2 contingency table. Logistic regression is a multivariable analysis method that expresses the strength of the association between a binary dependent variable and two or more independent variables as adjusted odds ratios.
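
For illustration, a minimal SciPy sketch (hypothetical data) covering correlation, simple linear regression and a 2 × 2 association:

# Association between variables: a minimal SciPy sketch (hypothetical data).
from scipy import stats

x = [1.0, 2.1, 2.9, 4.2, 5.1, 5.9]
y = [2.3, 4.1, 6.2, 8.0, 9.9, 12.1]

# Pearson's r for normally distributed numerical data;
# Spearman's rho is the non-parametric alternative.
r, p_pearson = stats.pearsonr(x, y)
rho, p_spearman = stats.spearmanr(x, y)

# Linear regression: predict the dependent y from the independent x.
fit = stats.linregress(x, y)
print(f"r = {r:.3f}, regression: y = {fit.slope:.2f}*x + {fit.intercept:.2f}")

# For categorical data in a 2 x 2 contingency table: chi-square test
# plus an odds ratio as the measure of association.
table = [[20, 30], [10, 40]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"chi-square p = {p_chi2:.4f}, odds ratio = {odds_ratio:.2f}")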

Question 4: Is there agreement between data sets? This can be a comparison of a new screening technique against the standard test, a new diagnostic test against the available gold standard, or of the ratings or scores given by different observers. As seen from Fig. 4, agreement between numerical variables may be expressed quantitatively by the intraclass correlation coefficient or graphically by constructing a Bland-Altman plot, in which the difference between two variables x and y is plotted against the mean of x and y. In the case of categorical data, Cohen's kappa statistic is frequently used; a kappa of 0 indicates no agreement beyond chance and 1 indicates perfect agreement, with values above 0.7 conventionally taken to indicate strong agreement. It is inappropriate to infer agreement by showing that there is no statistically significant difference between means or by calculating a correlation coefficient.
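
A minimal sketch of both approaches, assuming NumPy and the third-party scikit-learn package are available and using hypothetical ratings:

# Agreement: Cohen's kappa (categorical) and the ingredients of a
# Bland-Altman plot (numerical). Data are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Categorical agreement between two raters.
rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]
print(f"Cohen's kappa = {cohen_kappa_score(rater1, rater2):.2f}")

# Numerical agreement: differences plotted against means (Bland-Altman).
x = np.array([10.2, 11.5, 9.8, 12.1, 10.9])
y = np.array([10.5, 11.2, 10.1, 12.4, 10.6])
diff = x - y                   # vertical axis of the plot
mean = (x + y) / 2             # horizontal axis of the plot
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)  # 95% limits of agreement around the bias
print(f"bias = {bias:.3f}, limits of agreement = [{bias - loa:.3f}, {bias + loa:.3f}]")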

Question 5: Is there a difference between time-to-event trends or survival plots? This question is specific to survival analysis[3] (the endpoint for such analysis could be death or any event that can occur after a period of time), which is characterized by censoring of data, meaning that a sizeable proportion of the original study subjects may not reach the endpoint in question by the time the study ends. Data sets for survival trends are always considered non-parametric. If there are two groups, the applicable tests are the Cox-Mantel test, Gehan's (generalized Wilcoxon) test and the log-rank test. For more than two groups, Peto and Peto's test or the log-rank test can be applied to look for significant differences between time-to-event trends.
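
As a sketch, a two-group log-rank comparison might look like this in Python, assuming the third-party lifelines package (times and censoring indicators are hypothetical; an event indicator of 0 marks a censored subject):

# Two-group log-rank test using lifelines (hypothetical data).
from lifelines.statistics import logrank_test

durations_a = [6, 13, 21, 30, 31, 37]
events_a = [1, 1, 1, 0, 1, 0]
durations_b = [10, 19, 32, 42, 50, 57]
events_b = [1, 0, 1, 1, 0, 1]

result = logrank_test(durations_a, durations_b,
                      event_observed_A=events_a,
                      event_observed_B=events_b)
print(f"log-rank p = {result.p_value:.4f}")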

It can be appreciated from the above outline that distinguishing between parametric and non-parametric data is important. Tests of normality (e.g. the Kolmogorov-Smirnov test or the Shapiro-Wilk goodness-of-fit test) should be applied rather than assumptions made. The other prerequisites of parametric tests are that the samples have the same variance (i.e. are drawn from populations with equal variances), that observations within a group are independent and that the samples have been drawn randomly from the population.
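
In practice, both normality tests are one-liners; a minimal SciPy sketch, with data simulated here purely for illustration:

# Normality testing: a minimal SciPy sketch (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=66.5, scale=3.0, size=100)  # stand-in data

# Shapiro-Wilk: the null hypothesis is that the data are normal.
w_stat, p_sw = stats.shapiro(sample)

# Kolmogorov-Smirnov against a normal with the sample's own mean and SD
# (a common, if approximate, usage).
ks_stat, p_ks = stats.kstest(sample, "norm",
                             args=(sample.mean(), sample.std(ddof=1)))

print(f"Shapiro-Wilk p = {p_sw:.3f}, K-S p = {p_ks:.3f}")
# A small p value suggests departure from normality and favors
# non-parametric tests.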

A one-tailed test calculates the possibility of deviation from the null hypothesis in a specific direction, whereas a two-tailed test calculates the possibility of deviation in either direction. When Intervention A is compared with Intervention B in a clinical trial, the null hypothesis assumes there is no difference between the two interventions. Deviation from this hypothesis can occur in favor of either intervention in a two-tailed test, but a one-tailed test presumes that only one intervention can show superiority over the other. Although, for a given data set, a one-tailed test will return a smaller p value than a two-tailed test, the latter is usually preferred unless there is a watertight case for one-tailed testing.
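
The contrast is easy to demonstrate; a minimal SciPy sketch with hypothetical data (the alternative keyword requires SciPy 1.6 or later):

# One-tailed versus two-tailed p values (hypothetical data).
from scipy import stats

intervention_a = [7.1, 6.8, 7.5, 7.9, 6.9, 7.3]
intervention_b = [6.2, 6.5, 6.0, 6.8, 6.4, 6.1]

_, p_two_sided = stats.ttest_ind(intervention_a, intervention_b,
                                 alternative="two-sided")
_, p_one_sided = stats.ttest_ind(intervention_a, intervention_b,
                                 alternative="greater")

# The one-sided p is smaller, which is exactly why one-tailed testing
# needs strong prior justification.
print(f"two-sided p = {p_two_sided:.4f}, one-sided p = {p_one_sided:.4f}")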

It is obvious that we cannot cover all statistical tests in one editorial. However, the schemes outlined will cover the hypothesis testing demands of the majority of observational as well as interventional studies. Finally, one must remember that there is no substitute for working hands-on with dummy or real data sets, and for seeking the advice of a statistician, in order to learn the nuances of statistical hypothesis testing.

1. Parikh MN, Hazra A, Mukherjee J, Gogtay N, editors. Research methodology simplified: Every clinician a researcher. New Delhi: Jaypee Brothers; 2010. Hypothesis testing and choice of statistical tests; pp. 121–8.

2. Petrie A, Sabin C, editors. Medical statistics at a glance. 2nd ed. London: Blackwell Publishing; 2005. The theory of linear regression and performing a linear regression analysis; pp. 70–3.

3. Wang D, Clayton T, Bakhai A. Analysis of survival data. In: Wang D, Bakhai A, editors. Clinical trials: A practical guide to design, analysis and reporting. London: Remedica; 2006. pp. 235–52.

According to the CDC, the mean height of U.S. adults ages 20 and older is about 66.5 inches (69.3 inches for males, 63.8 inches for females).

Our sample data contain 435 college students from a single college. Let's test whether the mean height of students at this college is significantly different from 66.5 inches using a one-sample t test. The null and alternative hypotheses of this test will be:

H0: µHeight = 66.5  ("the mean height is equal to 66.5")
H1: µHeight ≠ 66.5  ("the mean height is not equal to 66.5")

Before the Test

In the sample data, we will use the variable Height, which is a continuous variable representing each respondent's height in inches. The heights exhibit a range of values from 55.00 to 88.41 (Analyze > Descriptive Statistics > Descriptives).

Let's create a histogram of the data to get an idea of the distribution, and to see if our hypothesized mean is near our sample mean. Click Graphs > Legacy Dialogs > Histogram. Move variable Height to the Variable box, then click OK.

To add vertical reference lines at the mean (or another location), double-click on the plot to open the Chart Editor, then click Options > X Axis Reference Line. In the Properties window, you can enter a specific location on the x-axis for the vertical line, or you can have the reference line drawn at the mean or median of the sample data. Click Apply to add the new line to the chart. Here, we have added two reference lines: one at the sample mean (the solid black line), and the other at 66.5 (the dashed red line).
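
For readers working outside SPSS, a rough matplotlib equivalent of this annotated histogram might look like the following (heights is simulated here as a stand-in for the real Height variable):

# Histogram with reference lines at the sample mean and the test value.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
heights = rng.normal(loc=68.0, scale=4.0, size=435)  # stand-in data

plt.hist(heights, bins=30, edgecolor="black")
plt.axvline(heights.mean(), color="black", label="sample mean")          # solid line
plt.axvline(66.5, color="red", linestyle="--", label="test value 66.5")  # dashed line
plt.xlabel("Height (inches)")
plt.legend()
plt.show()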

From the histogram, we can see that height is relatively symmetrically distributed about the mean, though there is a slightly longer right tail. The reference lines indicate that the sample mean is slightly greater than the hypothesized mean, but not by a huge amount. It's possible that our test result could come back significant.

Running the Test

To run the One Sample t Test, click Analyze > Compare Means > One-Sample T Test. Move the variable Height to the Test Variable(s) area. In the Test Value field, enter 66.5.

Click OK to run the One Sample t Test.

Syntax

If you are using SPSS Statistics 27 or later:

T-TEST /TESTVAL=66.5 /MISSING=ANALYSIS /VARIABLES=Height /ES DISPLAY(TRUE) /CRITERIA=CI(.95).

If you are using SPSS Statistics 26 or earlier:

T-TEST /TESTVAL=66.5 /MISSING=ANALYSIS /VARIABLES=Height /CRITERIA=CI(.95).
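
For readers without SPSS, a minimal SciPy equivalent of this run might look like the following (heights is simulated here as a stand-in for the real Height variable; the confidence_interval method requires SciPy 1.10 or later):

# One-sample t test in SciPy, mirroring the SPSS run above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
heights = rng.normal(loc=68.0, scale=5.3, size=408)  # stand-in data

result = stats.ttest_1samp(heights, popmean=66.5)
ci = result.confidence_interval(confidence_level=0.95)

print(f"t = {result.statistic:.3f}, df = {result.df}, p = {result.pvalue:.3g}")
print(f"mean difference = {heights.mean() - 66.5:.3f}")
# SciPy's CI is for the population mean itself; subtracting the test
# value gives the CI for the mean difference, as SPSS reports it.
print(f"95% CI for the difference: [{ci.low - 66.5:.3f}, {ci.high - 66.5:.3f}]")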

Output

Tables

Two sections (boxes) appear in the output: One-Sample Statistics and One-Sample Test. The first section, One-Sample Statistics, provides basic information about the selected variable, Height, including the valid (nonmissing) sample size (n), mean, standard deviation, and standard error. In this example, the mean height of the sample is 68.03 inches, which is based on 408 nonmissing observations.

[One-Sample Statistics table]

The second section, One-Sample Test, displays the results most relevant to the One Sample t Test. 

[One-Sample Test table, annotated A-F below]

A Test Value: The number we entered as the test value in the One-Sample T Test window.

B t Statistic: The test statistic of the one-sample t test, denoted t. In this example, t = 5.810. Note that t is calculated by dividing the mean difference (E) by the standard error of the mean (from the One-Sample Statistics box); here, that is the mean difference of about 1.53 inches divided by a standard error of about 0.26.

C df: The degrees of freedom for the test. For a one-sample t test, df = n - 1; so here, df = 408 - 1 = 407.

D Significance (One-Sided p and Two-Sided p): The p-values corresponding to a one-sided alternative hypothesis (in this case, µHeight > 66.5) and the two-sided alternative hypothesis (µHeight ≠ 66.5), respectively. In our problem statement above, we were only interested in the two-sided alternative hypothesis.

E Mean Difference: The difference between the "observed" sample mean (from the One Sample Statistics box) and the "expected" mean (the specified test value (A)). The sign of the mean difference corresponds to the sign of the t value (B). The positive t value in this example indicates that the mean height of the sample is greater than the hypothesized value (66.5).

F Confidence Interval for the Difference: The confidence interval for the difference between the specified test value and the sample mean.

Decision and Conclusions

Recall that our hypothesized population value was 66.5 inches, the [approximate] average height of the overall adult population in the U.S. Since p < 0.001, we reject the null hypothesis that the mean height of students at this college is equal to the hypothesized population mean of 66.5 inches and conclude that the mean height is significantly different from 66.5 inches.

Based on the results, we can state the following:

  • There is a significant difference between the mean height of students at this college and the mean height of the overall adult population in the U.S. (p < .001).
  • The average height of students at this college is about 1.5 inches greater than the U.S. adult population average (95% CI [1.013, 2.050]).