Nineteen Sixty-four is a research blog for the Center for Applied Research in the Apostolate (CARA) at Georgetown University edited by Mark M. Gray. CARA is a non-profit research center that conducts social scientific studies about the Catholic Church. Founded in 1964, CARA has three major dimensions to its mission: to increase the Catholic Church's self-understanding; to serve the applied research needs of Church decision-makers; and to advance scholarly research on religion, particularly Catholicism. Follow CARA on Twitter at: caracatholic.

11.26.2013

Don't Call, Will Tell


Did you get your call from Pope Francis yet? As you might have heard he is polling the world's Catholics about aspects of family life. The phone tree is quite a bit smaller than most in the media seem to be aware of. As is often the case ahead of a synod, Pope Francis has asked the Church's bishops to provide information about their dioceses. Some bishops have attempted to survey lay Catholics to provide input for their responses (...in one case using SurveyMonkey, which in my opinion is like trying to make Thanksgiving dinner in an Easy Bake Oven. There are reasons survey researchers get graduate degrees. Sampling, weighting, and question wording all really, really matter...). However, there is no systematic or scientific effort in this regard, and the questions from the Vatican are not really for the public in a personal and individual sense (...referencing Church documents and asking for information about the Catholic population's general awareness of and/or belief in aspects of the faith).

At the same time it is likely that many bishops will turn to polling data to answer some of the Vatican's questions such as cohabitation percentages, divorced and remarried percentages, and attitudes about same-sex civil unions and marriage. Many may look to existing national telephone polls from Gallup, Pew, or PRRI on these issues. However, there is emerging evidence that these may not provide the most precise view of public opinion and behavior for some matters.

A recent post examined the difference in self-reported Mass attendance between surveys in which an interviewer asks the question and those the respondent fills out on screen (i.e., self-administered). For certain questions, survey researchers know people are more honest when they are not interacting with a human being—things like charitable giving, voting, drug use, or marital infidelity. The presence of an interviewer (even just on the phone) increases the likelihood that respondents will cave to "social desirability" pressures and answer in a way that they feel is socially acceptable, normal, and/or good. Pew recently acknowledged this creates a distortion in their Church attendance estimates (CARA's survey methods prevent much of this).

Does this matter for any of the information the Vatican is looking for? In recent years there has been a lot of attention given to changing attitudes about same-sex relationships in the United States. Since the early 1990s, American attitudes about many lesbian, gay, and bisexual (LGB) issues have shifted dramatically. So much so that we know this is not happening just through generational replacement (i.e., older Americans passing away and being replaced by younger Americans with different attitudes). Some are changing their minds. Yet, research has also indicated that there may be a "Bradley effect" occurring with polling on same-sex issues (1, 2, 3). The Bradley effect is named after former Los Angeles Mayor Tom Bradley, who was leading in the polls in the 1982 California Governor's race but then lost the election. Some interpreted this result as being created by white voters who were unwilling to vote for a black candidate but who said they would do so when surveyed by interviewers. Some state referenda regarding same-sex marriage have done better in surveys than at the ballot box, leading to the hypothesis that some people say they would vote for same-sex marriage but then do the opposite on Election Day.

What is in a person's mind is not always reflected well in their answers to a survey interview. The 2012 American National Election Study (ANES) provides an unusual opportunity to examine the possibility that there is a mismatch for some between mind and mouth when being surveyed about LGB issues. The ANES includes 2,056 face-to-face interviews and 3,860 surveys completed by respondents onscreen (online or through televisions via the Knowledge Networks national panel). With a total of 5,916 interviews with voting-eligible Americans, we have relatively low margins of error to work with—even for sub-groups. In the figures below we show results of an analysis of the ANES data by religious affiliation. This includes three groups: Catholics, all other Christians, and those without a religious affiliation. There are not enough respondents with other, non-Christian religious affiliations to examine this group separately.
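For the curious, here is a quick sketch of the margin of error arithmetic behind those sample sizes. It is a simplification that assumes simple random sampling at 95% confidence; the ANES's weighting and design effects would widen these intervals a bit.

```python
import math

# Approximate margin of error for a proportion, assuming simple random sampling
# and the most conservative case (p = 0.5) at 95% confidence (z = 1.96).
def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for label, n in [("Face-to-face", 2056), ("Self-administered", 3860), ("Combined", 5916)]:
    print(f"{label} (n={n:,}): +/- {margin_of_error(n) * 100:.1f} percentage points")
```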

Of course, one other sub-group is key. How did the respondent take the survey? Were they interviewed or did they fill it out alone onscreen? The table below shows differences between these two samples after the survey's weighting is applied. Those who took the self-administered survey are less likely than those interviewed face-to-face to be independents (...this may be related to the mode, with some feeling more socially comfortable saying they are independent in these deeply partisan times when speaking to an interviewer). Either way, this does not lead to a significant partisan “imbalance,” as the effect is similar for Democrats (+3 percentage points) and Republicans (+5 percentage points). The self-administered respondents are more likely to be moderates than ideologues in either direction—although again the difference is small. In sum, the self-administered sample does not appear to lean more “left” or “right” than the face-to-face sample. This is also reflected in the self-reports of their 2012 vote. There is no difference in religiosity between the samples. The face-to-face respondents are slightly more likely than the self-administered respondents to be Tea Party supporters (22% compared to 17%).
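For readers curious about how a tabulation like this works, the sketch below computes weighted percentages separately for each survey mode. It is purely illustrative; the toy data and column names ("mode," "party_id," "weight") are hypothetical stand-ins for the actual ANES file, not CARA's code.

```python
import pandas as pd

# Toy data standing in for the ANES 2012 file (hypothetical values).
toy = pd.DataFrame({
    "mode":     ["face-to-face"] * 4 + ["self-administered"] * 4,
    "party_id": ["Democrat", "Republican", "Independent", "Independent",
                 "Democrat", "Democrat", "Republican", "Independent"],
    "weight":   [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.3, 0.7],
})

def weighted_percentages(df, item, weight="weight"):
    """Weighted percentage distribution of a categorical survey item."""
    totals = df.groupby(item)[weight].sum()
    return (100 * totals / totals.sum()).round(1)

# Tabulate the item separately for each survey mode.
for mode, sample in toy.groupby("mode"):
    print(mode)
    print(weighted_percentages(sample, "party_id"), "\n")
```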


The ANES data allow one to test for a potential "Bradley-like effect" by comparing the responses of these two samples. The figure below shows only very small differences between these two groups of respondents. When asked about same-sex marriage, 44% of Catholics said they supported gay and lesbian couples being allowed to legally marry when interviewed face-to-face. Slightly fewer, 38% of Catholics, responded as such in self-administered surveys. Small differences are also apparent among other respondents. However, all of these results are within margins of error. Note the other two response options are support for civil unions or no legal recognition. Majorities of all three religious groups, regardless of mode, support legal marriage or civil unions for same-sex couples.
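For those who like to check the math, the sketch below shows the kind of two-proportion test one could use to judge whether a gap like the one above exceeds sampling error. It assumes simple random sampling, and the sub-sample sizes in the comment are hypothetical placeholders; because the ANES uses weighting and a complex design, the effective sub-sample sizes are smaller than the raw counts, so a naive test like this one overstates precision.

```python
import math

# Two-sample z test for the gap between a face-to-face estimate and a
# self-administered estimate of the same proportion (SRS assumption).
def two_proportion_z(p1, n1, p2, n2):
    """Two-sample z statistic for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical usage with placeholder Catholic sub-sample sizes (not the ANES counts);
# gaps with |z| below roughly 1.96 fall within the margin of error at 95% confidence.
# two_proportion_z(0.44, 500, 0.38, 900)
```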


The next question was split-sample. One version of the question refers to "gays and lesbians" and the other to "homosexuals." Both focus on the same issue of legislation to prevent job discrimination. The differences here for Catholics and other Christians are beyond the margin of error. Both groups are less likely to "strongly favor" laws to protect against job discrimination based on sexual orientation when an interviewer is not present. This result is consistent across both sub-samples/versions of the question. Note that when interviewed by a person, majorities of Catholics and other Christians appear to strongly support this legislation. However, half or fewer strongly support it when answering the question onscreen alone. Also note that when "homosexuals" is used in the question wording, a difference among the unaffiliated becomes evident as well.


A similar split-sample question series explores attitudes about military service (i.e., the figure shows those "strongly favoring" the ability to serve). Here, opinion is more positive but also varies slightly by survey mode for Catholics when "gays and lesbians" is used. When this is replaced by "homosexuals," differences are apparent among all three sub-groups.


Turning to adoption, a slight difference among Catholics by mode is again apparent. Here the measurement is dichotomous—favoring legal adoption or not. The figure shows the percentage saying "yes," they favor legal adoption.


Finally, the ANES employs a number of "feeling thermometers." These gauge how "warmly" or "coolly" people feel toward individuals, groups, and institutions. A score of 100 is very favorable (warm) and 0 is very unfavorable (cold). A score of 50 indicates a lack of any particular feeling (neither warm nor cold). In 2012, the ANES asked about respondents' feelings toward "gay men and lesbians." As shown in the figure below, the average thermometer scores are slightly lower when no interviewer is present. However, the differences are so small that they are within the margin of error.


Potential social desirability pressures are more evident in the percentages of respondents who gave scores below 50 (unfavorable, cold). Across all three religious sub-groups, respondents in self-administered surveys were more likely than those in face-to-face interviews to give such scores.


The ANES also asks respondents about the number of LGB individuals among their families, neighbors, co-workers, and close friends. Significantly fewer in the self-administered surveys indicate they have an LGB family member, co-worker, neighbor, or close friend. Perhaps this explains all that has been shown above? I don't think so. Instead, I think this again reflects social desirability pressures. In a face-to-face interview, saying you don't know anyone in the LGB community could appear to some as an indicator of discrimination and avoidance.


Finally, the ANES asks respondents to define their own sexual orientation. Those with no religious affiliation, regardless of survey mode, are most likely to self-identify as homosexual, gay, or bisexual (LGB Americans are more likely than others to leave their childhood religion for no affiliation). No differences by mode are apparent.


Across all questions the differences are generally small. Any social desirability effect, if real, is weak. However, the differences are consistent across questions and consistent with existing research. Similar differences are not apparent between the samples for other questions about abortion, climate change, or affirmative action. More research is needed before drawing any firm conclusions on this topic (e.g., a replication of the ANES 2012 design in future election years). Note also that the ANES interviews analyzed here were conducted face-to-face; the greater social distance of a telephone interview may reduce social desirability pressures a bit. As a single snapshot, without replication, other explanations for the differences noted above are certainly possible.

I know some might read this post and then be tempted to run with the line, "CARA says the polls are wrong, Catholics don't support/oppose _____________." That would be a mistake. That is not what this post is saying. Instead, this post shows evidence that polls requiring a human interviewer may slightly overestimate support for LGB issues among Americans—Catholic or not. That does not mean these polls are "wrong." The changes in American attitudes over the last two decades are real, and in a broader context the potential social desirability effects examined here confirm this. Over-reporting may occur because respondents believe that the broader culture they live in has shifted and feel pressure to conform to it in a social setting—even when their own opinions differ. This is just as much an indicator of change as the broader trends in the polling data. At the same time, if one wants to know as exactly as possible what a group's opinion is on same-sex issues, CARA would recommend getting rid of the interviewer.

Phone image courtesy of MoShotz.

