Bias in the Vietnam War Draft?

In 1970 during the Vietnam War, the United States Selective Service conducted a lottery to decide which young men would be drafted into the armed forces (Fienberg, 1971). Each of the 366 birthdays in a year (including February 29) was paired with a draft number (from 1 to 366). The first people drafted were those born on the birthday paired with draft number 1. Everyone born on the day paired with draft number 2 was drafted next, and so on until the target number of draftees was reached.

In this lab, we will investigate whether there is evidence of systematic bias toward certain birthdays in the lottery process that produced the 1970 draft order.

The Data

We will regard the 366 dates of the year as the cases. Each date has two variables associated with it: DraftNumber (the position in the draft order of people with that birthday) and SequentialDate (the position of the date in the year, with January 1 being sequential date 1, January 31 being sequential date 31, February 1 being sequential date 32, and so on, all the way to sequential date 366, which is December 31).

We can think of SequentialDate as a fixed explanatory variable, and DraftNumber as a random outcome which serves as the response variable.

  1. In a perfectly fair, random lottery, what should be the value of the correlation coefficient between the DraftNumber and SequentialDate variables?

Exploring the Data

Let’s read in the data and look at a scatterplot with SequentialDate on the \(x\)-axis and DraftNumber on the \(y\)-axis.
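Something along these lines should work, as a sketch: it assumes the DraftLottery data frame (used throughout this lab) is already available from the course materials, and that the mosaic package supplies the plotting functions.

```r
# A sketch of the missing chunk, assuming DraftLottery (with columns
# SequentialDate and DraftNumber) has been provided by the course materials.
library(mosaic)

gf_point(DraftNumber ~ SequentialDate, data = DraftLottery)
```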

  1. Does the scatterplot reveal much of an association between draft number and sequential date? Guess the value of the correlation coefficient. Does it appear that this was a fair, random lottery?

Spoiler: It’s difficult to see much of a pattern or association in the scatterplot, so it seems reasonable to conclude that this was a fair, random lottery with a correlation coefficient near zero. But let’s dig a little deeper…

Let’s split the data by month and look at the distribution of draft numbers within each month of birthdays.
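As a sketch, one way to get a month variable, assuming DraftLottery does not already contain one (your course materials may provide it directly): map each SequentialDate onto a calendar date in a leap year, then draw side-by-side boxplots.

```r
# Derive a Month factor from SequentialDate. 1972 is used as the reference
# year because it is a leap year, so all 366 sequential dates map onto real
# calendar dates (including February 29).
DraftLottery$Month <- factor(
  months(as.Date(DraftLottery$SequentialDate - 1, origin = "1972-01-01")),
  levels = month.name
)

gf_boxplot(DraftNumber ~ Month, data = DraftLottery)
```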

  1. Looking at the boxplots, do you see any pattern across the months?

  2. The correlation coefficient between SequentialDate and DraftNumber is \(r = -0.226\). What does this mean in context? Is it consistent with the pattern you noticed in the last question?

  3. One possible explanation for the negative correlation for this particular drawing might be some form of systematic bias in the lottery process. Is some form of systematic bias the only explanation for a negative sample correlation? If not, what else could it be attributed to?

Setting Up a Hypothesis Test

If we want to determine whether a correlation like this one would be likely to occur by random chance in a fair lottery, we can do a hypothesis test.

  1. What are the null and alternative hypotheses, in words and as statements about a parameter?

  2. How could you simulate a dataset from a hypothetical world in which you know \(H_0\) to be true using pieces of paper (if you had enough of them)? What sample statistic would you record for each simulated sample to construct a randomization distribution?

  3. How would you go about evaluating the strength of the evidence against the null hypothesis and in favor of the alternative hypothesis, once you had your randomization distribution?

A small-scale physical simulation

Simulating all 366 draft numbers by hand would take forever, so let’s simulate a miniature version of the draft with only 10 birthdays, which will be paired with 10 draft numbers.

You have ten playing cards numbered 1-10, where Ace is 1. These cards will represent our mini set of birthdays.

  1. Simulate a dataset by shuffling the cards and placing them face up on the table in a random order from left to right. The leftmost “birthday” is draft number 1, the second from the left is draft number 2, etc.

  2. Enter the data into R as follows, replacing the ...s in the SequentialDate variable with the cards you drew, in order:
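A sketch of the data-entry chunk follows; MiniDraft is an assumed name, and the SequentialDate values shown are just one hypothetical deal.

```r
# Replace the SequentialDate values with the cards you actually drew, in
# left-to-right order (leftmost card = draft number 1, and so on).
MiniDraft <- data.frame(
  DraftNumber    = 1:10,
  SequentialDate = c(4, 9, 1, 7, 10, 2, 6, 3, 8, 5)  # hypothetical example deal
)
```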

Let’s plot your data and compute a correlation in the usual way:
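For example, assuming the MiniDraft data frame from the previous step (and mosaic’s formula interface for cor()):

```r
# Scatterplot and correlation for the miniature draft.
gf_point(DraftNumber ~ SequentialDate, data = MiniDraft)
cor(DraftNumber ~ SequentialDate, data = MiniDraft)
```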

  1. Now repeat your simulation and once again compute a correlation. Write down your two correlation values, then put them on the dot plot on the board.

  2. Based on a draft with just 10 birthdays, how surprising would a correlation of \(-0.226\) be? (That is, what proportion of the class’s simulated datasets yielded a correlation at least as far below zero as that?)

Scaling up the Number of Simulated Samples

Instead of shuffling and dealing cards, we can quickly get several thousand simulated datasets from a world in which there is no association between SequentialDate and DraftNumber. Here’s some R code to do it 5000 times.
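A sketch of that chunk, assuming mosaic (for do() and shuffle()) and the MiniDraft data frame from the card exercise:

```r
# Shuffle SequentialDate 5000 times. Each shuffle destroys any real
# association with DraftNumber, so each correlation arises purely by chance.
RandomizationDist <-
  do(5000) * cor(DraftNumber ~ shuffle(SequentialDate), data = MiniDraft)
```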

Let’s unpack this code a little bit. As with other simulations, we are first telling R to do() something 5000 times (with “times” represented by the * symbol; cute, huh?).

What are we doing in each iteration? Unlike with a sampling distribution or a bootstrap distribution, we’re not taking a random sample. Instead, we’re taking a dataset and randomly reordering (“shuffling”) one of the variables, in a way that has no relationship to the other variable. In this way we’re ensuring that the resulting dataset, while it might have a numerical correlation which isn’t zero, has no structural association between the two variables, and so any correlation that does arise is due purely to random chance.

In other words, the dataset we get most definitely arises from a world in which there is no “real” correlation between the variables (i.e., where the correlation parameter is exactly zero). But of course the correlation statistic (which we are computing in each iteration) will not be exactly zero.

Let’s plot the resulting randomization distribution of correlations:
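For instance (do() stores each simulated statistic in a column named cor):

```r
# Histogram of the 5000 simulated correlations.
gf_histogram(~ cor, data = RandomizationDist)
```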

We can even highlight the randomization correlations that lie beyond \(-0.226\).
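One way to do this, as a sketch, is to map the fill color to a logical condition:

```r
# Color the bars by whether each simulated correlation is at or beyond -0.226.
gf_histogram(~ cor, data = RandomizationDist, fill = ~ (cor <= -0.226))
```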

and calculate what proportion of the simulated correlations fall in this category:
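```r
# Proportion of simulated correlations at or beyond the observed -0.226.
prop(~ (cor <= -0.226), data = RandomizationDist)
```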

This is our \(P\)-value!

  1. Using a significance level \(\alpha\) of 0.05, would you reject \(H_0\) if you had gotten a dataset with a correlation of \(-0.226\) from a lottery with 10 birthdays? What does this decision mean in context?

Scaling up to the full set of birthdays

Of course, a draft with only 10 dates might produce different correlations than a draft with 366 dates. Let’s repeat this process with the actual data, which is in the DraftLottery data frame.

  1. Produce a randomization distribution based on the DraftLottery data, plot it (highlighting those randomization correlations which are at least as favorable to the alternative hypothesis as the \(-0.226\) that occurred in reality), compute the \(P\)-value, and interpret the results.

Aftermath

Once they saw these results, statisticians were quick to point out that something fishy had happened with the 1970 draft lottery. The irregularity can be attributed to improper mixing of the balls used in the drawing: balls with birthdays early in the year were placed in the bin first and balls with birthdays late in the year were placed in last, so without thorough mixing, the late-year balls settled near the top of the bin and tended to be drawn earlier. The mixing process was changed for the 1971 draft lottery (e.g., two bins, one for the draft numbers and one for the birthdays), for which the correlation coefficient turned out to be \(r = 0.014\).

  1. What is the \(P\)-value associated with this correlation? Explain why we can use the same simulated values. Interpret the results in context.

Other Kinds of Associations

On an abstract level, many other kinds of hypothesis tests can be described as tests of an association. If we have a binary explanatory variable, for example, which divides our cases into two groups, asking whether the groups are different (in terms of either a proportion, which is based on a binary response variable, or a mean, which is based on a quantitative response variable) amounts to asking whether the explanatory variable and the response variable are associated.

Therefore, if we want to construct a randomization distribution for such a hypothesis test, we can do essentially the same thing we did with the draft data: shuffle one of the variables to create completely random pairings between explanatory and response; that is, to create completely random groupings of the outcomes. If the data came from an experiment, we can think of this shuffling as simulating random assignment of cases into the groups; but even for an observational study, the shuffling still produces datasets from a hypothetical world in which the variables are unassociated, so the procedure remains sensible.

Let’s repeat the simulation of the penguins and metal bands data, this time in R.

Look at how the data frame is structured by peeking at the data.
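As a sketch, with hypothetical names (PenguinBands for the data frame, Band for the explanatory variable, and Survived for the response); adjust these to match the actual data:

```r
# Hypothetical data frame and variable names -- substitute the real ones.
head(PenguinBands)
tally(Survived ~ Band, data = PenguinBands)
```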

We can construct a randomization distribution the same way that we did for the DraftLottery data, but this time we will use a different statistic: rather than correlation (via the cor() function), we’ll compute the difference in proportions (via the diffprop() function). But everything else (other than the dataset and variable names) is the same.
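A sketch, continuing with the hypothetical PenguinBands names from above:

```r
# Observed difference in survival proportions between the two groups.
diffprop(Survived ~ Band, data = PenguinBands)

# Randomization distribution: shuffle the group labels 5000 times. do()
# stores the statistic in a column named diffprop.
PenguinRand <- do(5000) * diffprop(Survived ~ shuffle(Band), data = PenguinBands)
gf_histogram(~ diffprop, data = PenguinRand)
```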

Note: with two groups, the diffprop() function puts the group names in alphabetical order and subtracts the first group’s proportion from the second’s.

  1. Perform the test of whether the long run survival rate would differ between penguins with and without metal bands, calculating a \(P\)-value and interpreting the results.

We can use the exact same procedure to test for a difference in means.

The SleepCaffeine dataset in the Lock5Data package comes from an experiment examining memorization ability for two groups of randomly assigned college students: one group took a 90-minute nap; the other had a pill containing an amount of caffeine comparable to a cup of coffee. The response was the number of words recalled from a list the students were asked to memorize.
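To get started, load the data and take a peek (SleepCaffeine ships with Lock5Data, as noted above):

```r
# Load the SleepCaffeine data frame from the Lock5Data package.
library(Lock5Data)
data(SleepCaffeine)
head(SleepCaffeine)
```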

  1. Carry out a hypothesis test to ask whether there is a difference in average recall ability (as summarized by the mean number of words recalled) between college students who take a 90-minute nap and college students who take a cup of coffee’s worth of caffeine. State the hypotheses, compute the sample statistic, construct and plot the randomization distribution, compute the \(P\)-value, and interpret the results in context. (Hint: the diffmean() function computes a difference in means between two groups; it has the same syntax as cor() and diffprop().)