[See updates below]
John Villasenor’s study on free speech attitudes among college students has received some attention.
In one of the questions he asked, only 47% of students favored “an open learning environment where students are exposed to all types of speech and viewpoints, even if it means allowing speech that is offensive or biased against certain groups of people.” In contrast, 53% favored speech restrictions to “create a positive learning environment.”
This is a huge swing from last year, when Gallup asked the same question and found that only 22% favored speech restrictions.
This roughly 30-point shift could be because attitudes changed rapidly. Villasenor’s study was fielded immediately after Charlottesville, for example, and students might be more primed to think about Nazis marching on their campus.
It could also be because of differences in survey methods. Surveying college students is really hard.
Here’s what Gallup did:
Results for the college student sample are based on telephone interviews with a random sample of 3,072 U.S. college students, aged 18 to 24, who are currently enrolled as full-time students at four-year colleges. Gallup selected a random sample of 240 U.S. four-year colleges, drawn from the Integrated Postsecondary Education Data System (IPEDS), that were stratified by college enrollment size, public or private affiliation, and region of the country. Gallup then contacted each sampled college in an attempt to obtain a sample of their students. Thirty-two colleges agreed to participate. The participating colleges were [long list of schools.] Gallup used random samples of 40% of each college’s student body, with one school providing a 32% sample, for its sample frame. The sample frame consisted of 54,806 college students from the 32 colleges. Gallup then emailed each sampled student to complete an Internet survey to confirm his or her eligibility for the study and to request a phone number where the student could be reached for a telephone interview. A total of 6,928 college students completed the Web survey, for a response rate of 13%. Of these, 6,814 students were eligible and provided a working phone number. Telephone interviews were conducted Feb. 29-March 15, 2016. The response rate for the phone survey was 49% using the American Association for Public Opinion Research’s RR-III calculation. The combined response rate for the Web recruit and telephone surveys was 6%.
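The quoted response rates multiply out roughly as described. Here is a quick sketch of the arithmetic, using only the figures Gallup reports above (this is not Gallup’s exact AAPOR RR-III computation, just the back-of-envelope version):

```python
# Figures taken from Gallup's methodology description above.
sample_frame = 54806   # students in the frame from the 32 participating colleges
web_completes = 6928   # students who completed the Web recruit survey
phone_rr = 0.49        # reported response rate for the telephone follow-up

web_rr = web_completes / sample_frame        # ~0.13, matching the reported 13%
combined_rr = web_rr * phone_rr              # ~0.06, matching the reported 6%

print(f"Web recruit response rate: {web_rr:.0%}")
print(f"Combined response rate:    {combined_rr:.1%}")
```

A combined rate in the single digits is typical for this kind of two-stage recruit, which is part of why weighting and nonresponse adjustment matter so much.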
Here’s how Villasenor describes his methodology:
Here is some more detailed information regarding the survey: This web survey of 1,500 undergraduate students at U.S. four-year colleges and universities was conducted between August 17 and August 31, 2017. [sentence about financing]. I designed the survey questions and then requested that UCLA contract with a vendor for the data collection.
There are a lot of standard details missing from Villasenor’s description, but it does seem relevant that he was attempting to survey college students during the summer. I really wish my data collection was as easy as requesting my university hire someone to collect it.
Gallup on weighting:
The college student sample was weighted to correct for unequal selection probability and nonresponse. It was also weighted to match the demographics of U.S. colleges on enrollment, public or private affiliation, and region of the country, based on statistics from the IPEDS database, to ensure the sample is nationally representative of U.S. college students.
Villasenor on weighting:
I then performed the data analysis, including weighting. The survey results presented here have been weighted with respect to gender to adjust for the reported 57 percent/43 percent gender split among college students; by contrast, 70 percent (1,040 of the 1,500) of the survey respondents identified as female.
While Gallup is weighting on a range of variables, Villasenor relies largely on faith that his sample is representative.
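The gender-only adjustment Villasenor describes is simple cell weighting: each respondent gets a weight equal to their group’s population share divided by its sample share. A minimal sketch, using the percentages he reports:

```python
# Gender-only post-stratification weights, per Villasenor's description:
# weight = population share / sample share for each group.
pop_share    = {"female": 0.57, "male": 0.43}  # reported college-population split
sample_share = {"female": 0.70, "male": 0.30}  # 1,040 of 1,500 respondents were female

weights = {g: pop_share[g] / sample_share[g] for g in pop_share}
print(weights)  # female ≈ 0.81 (down-weighted), male ≈ 1.43 (up-weighted)
```

Gallup, by contrast, adjusts for selection probability and nonresponse and then rakes to enrollment, public/private affiliation, and region, so its weights carry far more of the representativeness burden than a single gender correction can.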
Gallup on sampling error:
For results based on this sample of college students, the margin of sampling error is ±3 percentage points at the 95% confidence level.
Villasenor on sampling error:
To the extent that the demographics of the survey respondents (after weighting for gender) are probabilistically representative of the broader U.S. college undergraduate population, it is possible to estimate the margin of error in the tables above. For a confidence level of 95 percent, the margin of error is between approximately 2 percent and 6 percent—the margin of error is smaller for the categories with larger numbers of respondents (such as “All” category in the tables, which has 1500 respondents), and larger for the categories with smaller numbers of respondents (such as “Republicans”).
Gallup’s sample is twice as large as Villasenor’s, yet its margin of error is 50% larger. I suspect the difference is that Gallup adjusted its errors for the complex nature of its sample, while Villasenor assumed he had a probability sample and used the standard formulas.
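This is easy to check against the textbook simple-random-sample formula. Villasenor’s lower bound matches it almost exactly, while Gallup’s reported ±3 points on a larger sample implies a design effect of roughly 3, consistent with a clustered, weighted design:

```python
import math

def moe_srs(n, p=0.5, z=1.96):
    """Margin of error at 95% confidence, assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Villasenor's n=1500: the SRS formula gives about ±2.5%,
# matching the low end of his reported 2-6% range.
print(f"n=1500 SRS: ±{moe_srs(1500):.1%}")

# Gallup's n=3072: SRS would give about ±1.8%, but Gallup reports ±3.
print(f"n=3072 SRS: ±{moe_srs(3072):.1%}")

# The gap implies a design effect of roughly (0.03 / SRS moe)^2 ≈ 2.9.
deff = (0.03 / moe_srs(3072)) ** 2
print(f"implied design effect: {deff:.1f}")
```

This is my inference from the published numbers, not something either report states; but a design effect near 3 is plausible for a sample clustered within 32 schools and weighted on several variables.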
Villasenor is an ~~electoral~~ electrical engineering professor with no apparent survey experience. He DIYed a study on a topic he was interested in. More people should do that, but we shouldn’t treat it like valid research when the methodology departs from best practices by so much. At a minimum, Villasenor needs to explain more about the sample, data collection, and weighting procedures so we can figure out how much of the change in attitudes represents a real shift in opinion.
I emailed John Villasenor to let him know about the post. With his permission, here is part of his response:
I will also mention here (it wasn’t mentioned in the Brookings post) that the percentage of students who report that they are of “Spanish, Hispanic or Latino origin or descent” (18% of respondents) is quite similar to the percentage in the overall U.S. college population.
It’s also encouraging that the geographic representation is so diverse, with respondents from 49 states and DC. In more detail, for example, in terms of the location of where the respondents graduated from high school, with respect to the most populous states: California has 12.1% of the overall US population and had about 11.9% of the respondents; Texas has 8.6% of the US population and had 7.5% of the respondents; Florida has 6.4% of the US population and had 7.1% of respondents. (I have data for all 49 states from which there were respondents; I’m just mentioning the top 3 in the preceding sentence.)
So, there are some good demographic indications, though of course there’s a long list of possible demographic factors to consider and it’s very difficult to get a respondent group that matches on every possible demographic factor.
One more quick point, based on my ~30 years of working with randomness, including degrees of randomness, in the context of engineering: I think there’s an opportunity for more nuance when discussing how good a sample is. In other words, there is an enormous variation in sample types/qualities – and it’s possible to say more, quantitatively, than is often said about samples if the math is done properly.
The Guardian has more details on the survey. It was “an opt-in online panel of people who identified as current college students.” The vendor is still a mystery.
One last update from Villasenor. The vendor is still a mystery!
At my request, UCLA contracted with (or issued a purchase order to; I don’t remember the specifics) the RAND Survey Research Group (SRG), which oversaw the actual data collection (and my understanding is that they in turn used a vendor to help get the panel). RAND provided me with the raw data and I did all of the analysis on that data; RAND SRG played no role in the analysis.
The IRB processing for this survey was done by RAND. There is an MOU between UCLA and RAND stating that when RAND does the IRB approval, there is no need for a separate IRB process within UCLA (search for “RAND” in the link below):
5 thoughts on “surveys are hard work”
“Electoral engineering professor.” Apropos.
Excellent comparison. Very helpful as a demonstration of context, method, and change across time.