Interviewer Effects in the Arab World: Evidence from Qatar

ICYMI (In Case You Missed It), the following work was presented at the 2015 Regional Conference of the World Association of Public Opinion Research (WAPOR) in Doha, Qatar.  Justin Gengler, Research Projects Manager in the Policy Unit team at Qatar University’s Social and Economic Survey Research Institute (SESRI), presented “Interviewer Effects in the Arab World: Evidence from Qatar” as part of the session “Data Collection” on Monday, March 9th, 2015.

Post developed by Justin Gengler.

My paper, which examines interviewer effects in surveys conducted in Qatar, is the first output from a long-term research project conceived shortly after my arrival at SESRI in late 2011. Having recently finished a mass political survey of Bahrain in which interviewer characteristics played an influential role in shaping responses across an array of substantive topics, I was surprised to discover that all of SESRI’s interviewers at the time were non-nationals. On reflection, however, this is less surprising when one considers the economic and social disincentives for nationals to work as field interviewers.

To test whether what I observed in Bahrain would also hold true for Qatari respondents, my collaborators and I designed a survey experiment that enabled us to isolate and measure the influence of interviewer nationality, Qatari versus non-Qatari. To do so, we had to recruit Qatari nationals to conduct the interviews, and for this we turned to Qatar University students.

We recruited 45 female student interviewers (males were excluded to hold interviewer gender constant): 31 Qatari nationals and 14 non-nationals who spoke Egyptian, Levantine, and North African Arabic dialects. All 45 interviewers underwent standard field interviewer training together, and the students were enthusiastic about the project.

Survey respondents were randomly assigned to either the Qatari or the non-Qatari interviewer group, and this assignment was preserved across callbacks, dropped calls, and other eventualities. Over a few weeks of interviewing in June 2014, we reached a total of 1,587 respondents, 1,288 of whom completed the entire survey. In all, 832 of the surveys (53%) were carried out by Qatari interviewers.
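
The paper does not describe the sample-management mechanics, but the key requirement (an assignment that survives callbacks) can be met by deriving each case’s condition deterministically from its identifier rather than redrawing it at each contact attempt. A minimal Python sketch, with a hypothetical case-ID format:

```python
import hashlib

def assign_condition(case_id: str) -> str:
    """Deterministically assign a sampled case to an interviewer group.

    Hashing the case identifier, rather than drawing a fresh random
    number at each contact attempt, guarantees that callbacks and
    dropped calls are always routed back to the same condition.
    """
    digest = hashlib.sha256(case_id.encode("utf-8")).hexdigest()
    return "qatari" if int(digest, 16) % 2 == 0 else "non_qatari"

# Every contact attempt for the same case yields the same assignment.
case = "QA-2014-00123"  # hypothetical case identifier
assert assign_condition(case) == assign_condition(case)
```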

As expected, we found that Qatari respondents’ answers differed depending on whether they were interviewed by a Qatari. But we also found that Qatari interviewers tended to recruit and retain different types of Qatari respondents than non-Qatari interviewers did: on average, they recruited more males, and respondents who were slightly older and had lower levels of education.

Even after accounting for these compositional differences, interviewer nationality still exerts an important influence on Qataris’ responses. Two topics in particular seem especially sensitive to interviewer nationality:

1. Economic and financial self-assessments: Qatari respondents tend to rate their own situations more positively when asked by Qatari interviewers.

2. Attitudes toward foreigners and policies on immigration and naturalization: Qatari respondents tend to give more negative answers to Qatari interviewers than to non-Qatari interviewers.

We did find that answers on some topics did not differ between Qatari and non-Qatari interviewers. In particular, answers to items about political interest, political attitudes, and voting behavior (items that are likely to evoke sensitivities but do not otherwise carry local social or economic connotations) were not affected by interviewer nationality. This suggests that interviewer effects in Qatar stem not from greater trust in or comfort with Qatari interviewers per se, but from an in-group/out-group dynamic rooted in the distinctive demographic character of Qatar and similarly configured Arab Gulf states.
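
A claim of an effect “even after accounting for” compositional differences implies a model with demographic controls. As a minimal sketch (not the authors’ actual code, and with file and variable names invented for illustration), the economic self-assessment result might be modeled like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per completed interview.
df = pd.read_csv("qatar_interviewer_experiment.csv")  # assumed file name

# Logit of a positive economic self-assessment on the experimental
# condition, controlling for the respondent characteristics that
# differed across conditions (gender, age, education).
model = smf.logit(
    "econ_positive ~ qatari_interviewer + male + age + education",
    data=df,
).fit()
print(model.summary())
```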

Can students accurately answer about the education level of their parents?

ICYMI (In Case You Missed It), the following work was presented at the 2015 Regional Conference of the World Association of Public Opinion Research (WAPOR) in Doha, Qatar. Linda Kimmel, a researcher at the Institute for Social Research’s Center for Political Studies, presented “Factors influencing the accuracy of student reports of family background: Evidence from the 2012 Qatar Education Study” as part of the session “Questionnaire Design” on Sunday, March 8th, 2015.

Post developed by Linda Kimmel.

Education scholars have long assumed family background to be especially salient in predicting student achievement and student educational aspirations.

Having students report on family and household characteristics is a common practice in school surveys, as it saves the cost, time, and resources of also interviewing parents. But these savings come with a potential quality tradeoff: students often misreport this information, especially parental education.

Our paper, co-authored with Brian Hunscher, Jill Wittrock, and Kien Trung Le, examines the determinants of misreporting of parental education using survey data from the 2012 Qatar Education Study (QES). Conducted by Qatar University’s Social and Economic Survey Research Institute (SESRI), the study surveyed students, parents, teachers, and administrators about their views on K-12 education in Qatar.

In prior research, student age and gender, family structure, closeness to parents, race, student academic achievement, and parents’ highest level of education have all been found to be related to the accuracy of student reports. Our paper adds to this literature by examining two new factors: immigration status and school type. Parental education was included on both the student and parent surveys in the QES, allowing us to determine which underlying factors lead to a higher probability of misreporting.
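
Because misreporting is a binary outcome at the student level, a natural way to examine its determinants is a logistic regression. The following is a hedged sketch of such a model, not the authors’ code, with invented file and variable names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged file: one row per student-parent pair, where
# 'misreport' = 1 if the student's proxy report of parental education
# disagrees with the parent's self-report.
df = pd.read_csv("qes_student_parent.csv")  # assumed file name

# Probability of misreporting as a function of established predictors
# plus the two new factors: immigration status and school type.
model = smf.logit(
    "misreport ~ age + female + high_achiever"
    " + C(immigration_status) + C(school_type)",
    data=df,
).fit()
print(model.summary())
```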

Overall, we found relatively high levels of agreement between students and parents about parental education. Students were about equally accurate in reporting on either parent’s education, and while strict agreement shows that only about 60% of students’ proxy reports match parents’ self-reports, this improves by 10% if we use a slightly more forgiving, collapsed operationalization of education.
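
The strict and collapsed agreement rates described above can be computed by merging the student and parent files and comparing categories. A minimal sketch, assuming hypothetical file names, a shared household identifier, and an invented category mapping:

```python
import pandas as pd

# Hypothetical student and parent extracts, linked by household ID;
# education is recorded in matching categories on both surveys.
students = pd.read_csv("qes_students.csv")  # columns: hh_id, mother_educ
parents = pd.read_csv("qes_parents.csv")    # columns: hh_id, mother_educ

merged = students.merge(parents, on="hh_id", suffixes=("_student", "_parent"))

# Strict agreement: exact category match.
strict = (merged["mother_educ_student"] == merged["mother_educ_parent"]).mean()

# Collapsed agreement: map detailed categories into broader bins first.
# (This particular mapping is invented for illustration.)
collapse = {
    "primary": "low", "preparatory": "low",
    "secondary": "mid", "diploma": "mid",
    "bachelor": "high", "postgraduate": "high",
}
collapsed = (
    merged["mother_educ_student"].map(collapse)
    == merged["mother_educ_parent"].map(collapse)
).mean()

print(f"strict agreement: {strict:.0%}, collapsed agreement: {collapsed:.0%}")
```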

Consistent with previous research, student age, whether the father lives at home, and student academic performance all contributed to differences in student reports of parental education. Contrary to prior findings, we did not find support for a gender effect, and household size and parental involvement with homework had no impact.

The influence of school type was significantly different for mother’s education but not for father’s. Students in Arabic Private and Community schools showed stronger agreement on mother’s education than students in Independent and International schools; students in Community schools, in particular, were far more accurate reporters than children in other school types.

No significant differences between Qatari and non-Qatari students were found in reports of mother’s education, but the difference for father’s education was statistically significant.

Conclusion. If students can place their parents into a limited number of education categories with a high level of accuracy, then student reporting may be an acceptable tradeoff given the additional cost and burden of fielding a parent survey. Conversely, if an educational researcher is interested in the marginal effect of more detailed parental education levels, then student reporting may not be acceptable. Knowledge gained from this study and previous work should help researchers decide under which conditions student reports can serve as valid substitutes for parental self-reports.

Question design matters: the impact of wording on gender attitudes

ICYMI (In Case You Missed It), the following work was presented at the 2015 Regional Conference of the World Association of Public Opinion Research (WAPOR) in Doha, Qatar.  Fatimah Al-Khaldi, a researcher on the Policy Unit team at Qatar University’s Social and Economic Survey Research Institute (SESRI), presented “Survey Experiment on Attitudes & Perceptions of Women in the Political Sphere” as part of the session “Questionnaire Design” on Sunday, March 8th, 2015.

Post developed by Fatimah Al-Khaldi.

To what extent are survey results relating to gender attitudes accurate and reliable? Survey questions are often worded in ways that cast men more positively than women. For instance, consider the following items, which appear in questionnaires from the World Values Survey:

“When jobs are scarce, men should have more right to a job than women.”

“On the whole, men make better political leaders than women do.”

In these questionnaires, statements that contradict gender stereotypes are almost entirely absent.

To investigate the impact of this question wording, I conducted a survey experiment using Amazon’s Mechanical Turk. Respondents, a sample of adults living in the United States, were divided into two samples, and each sample was asked to what extent they agreed or disagreed with a set of statements.

Sample 1 received statements worded to emphasize gender stereotypes:

[Table: statements presented to Sample 1]

With the exception of the fourth statement, a majority of Sample 1 respondents strongly disagreed with the statements. For every statement, women were more likely than men to strongly disagree, and respondents who identified as moderate or liberal were more likely than self-identified conservative or very conservative respondents to strongly disagree.

Sample 2 received statements worded to contradict accepted stereotypes:

For each of the five statements, approximately half of Sample 2 respondents answered neutrally (“indifferent”), with women more likely than men to express neutrality. Men were more likely than women to strongly disagree with the first statement, and respondents who identified as very conservative were more likely to strongly disagree with the first and fifth statements.

The results of the experiment show a clear difference in response distributions across the two samples, confirming that the wording of questions on gender attitudes does indeed matter.
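
A standard way to test whether the two samples’ response distributions differ is a chi-square test on the cross-tabulation of sample by response category. A sketch with invented counts (the actual frequencies are not reproduced here):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical response counts for one statement, by sample; columns are
# strongly disagree, disagree, indifferent, agree, strongly agree.
table = np.array([
    [95, 40, 15, 10, 4],   # Sample 1: stereotype-consistent wording
    [20, 25, 80, 25, 14],  # Sample 2: stereotype-contradicting wording
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```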

Methodological Statement. The survey experiment was conducted in English on the Internet via Amazon’s Mechanical Turk from March 8-16, 2014, using a sample of 328 adults who were 18 years of age or older and living in the United States (citizens and residents). The work was part of a class project at Johns Hopkins University.
