Healthy Skepticism

Phone Survey

Brittany McQueer

Think about the last time you participated in a survey. Was it for fun on an online platform like BuzzFeed, health-related in a doctor's office, or for some other reason, such as the U.S. Census Bureau? Did you stop to think about the research qualifications of the person administering the survey on the other end of the phone, the team that developed the questions, or the algorithm determining which Harry Potter house you belong to? Or how the results would be interpreted and used by those who read them?

My science background has trained me to approach all research with a dose of healthy skepticism. As a high school science teacher, one of my objectives in every class is to instill that same skeptical frame in my students and equip them to critically analyze multiple pieces of research. As a Public Health graduate student, I get to use those skills every day. While spending spring break with classmates on the Public Health in Action trip to the Rio Grande Valley (RGV), Texas, along the Mexican border, I was able to put those skeptical skills into practice.

We worked with a research team at the University of Texas Rio Grande Valley on the pilot of a lengthy mental health survey, CoPhII, that will be administered via random digit dialing to residents of the 956 area code, which spans four counties: Willacy, Hidalgo, Cameron, and Starr. My team and I spent many hours reviewing the survey and critically analyzing each question, the usefulness of the answers, how to deliver them, and of course the most important aspect of the survey: the introduction and hook! Part of the team analyzed the Spanish translation, taking great care to ensure the questions were translated into language appropriate for the area. Many items on the original document were literal translations or were too formal for the population we would be interacting with. The team was relentless in its attention to detail and highlighted many linguistic issues that, unfortunately, were bound by IRB protocol and could not be resolved.

Upon completing our analysis and review, we discussed the limitations of translation in the research study. The researchers we worked with gave an unsurprising response: "It is not often that we work with a team so dedicated to translation adaptation for a specific community. Most of the issues you presented are with the standardized survey measures, and unfortunately we cannot edit those and maintain the same level of validity. We offset this by having a large population to pull from."

This, of course, got my skeptical brain thinking. In non-English-speaking populations especially, what are we doing to make sure our data are actually valid? How valid can results be when the survey makes little sense to the participants, no matter how many of them we reach? In the CoPhII case, was there a survey originally written in Spanish that could have been used in place of the translated English version? Is it appropriate to contact the original survey's creator and ask for a revision? Is it necessary to run a validation study on every translated survey before it can be used?

There must be an easier, more realistic way to ensure that surveys capture accurate data within a population. Obviously, no data set can be 100% accurate and 100% valid. Understanding a study's limitations is key to reading and interpreting its data. To drive this idea home while teaching, I would have my students break apart journal articles and describe each study's sponsors, methods, and limitations. Healthy skepticism keeps humanity thriving; it separates us from the herd.

The next time you are asked to take a survey, participate and provide feedback, think about the intent of the questions and who is asking them, and remember that your input is valuable!