Survey researchers have devoted increasing attention to understanding how social networks influence individual-level outcomes such as attitudes, political participation, and other forms of collective action. This work implicitly assumes that errors in individuals’ self-reported attitudes and behaviors are unrelated to the composition of their social networks. We evaluate this assumption, developing a theory explaining how social networks influence the survey response by shaping the social desirability of various behaviors and attitudes. We apply our theory to the study of political participation, examining evidence from three observational datasets and an experiment conducted on a national sample. We demonstrate that non-voting respondents’ tendency to falsely report having voted is driven by political participation levels among their close friends and family. We show that this tendency can artificially inflate estimates of social influence. This study therefore suggests that survey researchers must account for social influence on the survey response to avoid biasing their conclusions.
Many academic surveys administered online include a banner along the top of the survey displaying the name or logo of the researcher’s university. Our study aims to determine whether these banners influence survey respondents’ answers, that is, whether they induce sponsorship effects. For this purpose, we field three different studies on Amazon’s MTurk where we randomly assign the sponsoring institution. Our outcome measures include survey questions about social conservatism, religious practices, group affect, and political knowledge. We find that respondents provide similar answers and exhibit similar levels of effort regardless of the apparent sponsor.
The concept of interpersonal political disagreement remains central to work on deliberation in mass publics, and to the broader study of social context. Indeed, the extent to which individuals are exposed to challenging information, perspectives, and norms in their everyday lives is widely considered to play a fundamental role in democratic functioning. Using name generators embedded in surveys, some scholarship has emphasized the mostly agreeable nature of Americans’ core social networks. Building on these techniques, we reconsider these—perhaps incomplete—portraits of disagreement by: 1) replicating standard political name generator prompts, and 2) randomly assigning respondents to additional ones that explicitly ask them to name individuals with whom they disagree. The manipulations embedded in these items vary the depth of disagreement, as well as its subject matter and how it is experienced. Our study advances debates over the conceptualization and operationalization of disagreement, and is particularly timely given contemporary narratives concerning division and affective polarization.
Political surveys often include multi-item scales to measure individual predispositions such as authoritarianism, egalitarianism, or racial resentment. Scholars typically use these scales to examine how these predispositions vary across different subgroups, comparing women to men, rich to poor, or Republican to Democratic voters. Such research implicitly assumes that, say, Republican and Democratic voters’ responses to the egalitarianism scale measure the same construct in the same metric. Unfortunately, this research rarely evaluates whether this assumption holds. We present a framework to test this assumption and correct scales when it fails to hold. We apply this framework to 13 commonly used scales on the 2012 and 2016 ANES. We find widespread violations of the equivalence assumption and demonstrate that these violations often lead to biased conclusions about the magnitude or direction of theoretically important group differences. These results suggest that researchers should not rely on multi-item scales without first establishing measurement equivalence.
At least since Key (1949), scholars have been interested in how voters’ geographic proximity to candidates predicts their support for these candidates. This relationship has largely been studied in state elections using aggregate voting data. As a consequence, we know little about why geographic proximity predicts support, nor do we know whether this pattern occurs in local elections. We address these issues using a unique dataset that identifies the residential locations of all voters and candidates running in seven local elections. The data also reveal the candidate choices of every voter, their personal attributes such as ethnicity and wealth, and their social affiliations including their occupations, churches, and families. These data allow us to examine how citizens’ geographic locations interweave with their social networks, their interests, their personal attributes, and ultimately their voting behavior.
Can political discussion help individuals improve their political decisions? Formal deliberation often helps citizens overcome their political ignorance, but recent work suggests informal, everyday discussion often fails to promote these benefits. I argue that recent informal discussion research has investigated contexts in which discussants hold a narrow range of motivations, which differ from those held in many real-world conversations. I develop and test a theory explaining how motivations influence the efficacy of political discussion. The analysis examines a small-group experiment, which randomly assigns incentives to alter subjects’ (1) strength of partisan predispositions toward two computer-generated candidates, (2) motivations to form accurate judgments about these candidates, and (3) motivations to provide accurate information to fellow subjects. The results suggest that previous informal discussion research generalizes well to individuals holding elevated partisan motivations, but underestimates discussion’s civic capacity for individuals holding elevated accuracy and, especially, prosocial motivations.