Replication Data for: Addressing Measurement Errors in Ranking Questions for the Social Sciences (doi:10.7910/DVN/UCTXEF)

Document Description

Citation

Title:

Replication Data for: Addressing Measurement Errors in Ranking Questions for the Social Sciences

Identification Number:

doi:10.7910/DVN/UCTXEF

Distributor:

Harvard Dataverse

Date of Distribution:

2024-10-30

Version:

1

Bibliographic Citation:

Kim, Seo-young Silvia; Atsusaka, Yuki, 2024, "Replication Data for: Addressing Measurement Errors in Ranking Questions for the Social Sciences", https://doi.org/10.7910/DVN/UCTXEF, Harvard Dataverse, V1

Study Description

Citation

Title:

Replication Data for: Addressing Measurement Errors in Ranking Questions for the Social Sciences

Identification Number:

doi:10.7910/DVN/UCTXEF

Authoring Entity:

Kim, Seo-young Silvia (Seoul National University)

Atsusaka, Yuki (University of Houston)

Producer:

Political Analysis

Distributor:

Harvard Dataverse

Access Authority:

Kim, Seo-young Silvia

Depositor:

Kim, Seo-young Silvia

Date of Deposit:

2024-09-15

Holdings Information:

https://doi.org/10.7910/DVN/UCTXEF

Study Scope

Keywords:

Social Sciences, survey, ranking, measurement error, bias correction, rank order

Abstract:

Social scientists often use ranking questions to study people's opinions and preferences. However, little is understood about the general nature of measurement errors in such questions, let alone their statistical consequences and what researchers can do about them. We introduce a statistical framework to improve ranking data analysis by addressing measurement errors in ranking questions. First, we characterize measurement errors from random responses---arbitrary and meaningless responses based on a wide range of random patterns. We then quantify bias due to random responses, show that the bias may change our conclusion in any direction, and clarify why item order randomization alone does not solve the statistical issue. Next, we introduce our methodology based on two key design-based considerations: item order randomization and the addition of an "anchor" ranking question with known correct answers. They allow researchers to (1) learn about the direction of the bias and (2) estimate the proportion of random responses, enabling our bias-corrected estimators. We illustrate our methods by studying the relative importance of people's partisan identity compared to their racial, gender, and religious identities in American politics. We find that about 30% of respondents offered random responses and that these responses may affect our substantive conclusions.
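
The abstract describes the estimators only at a high level. As a rough, non-authoritative illustration of the logic, the sketch below shows how an anchor ranking question with a known correct ordering could identify the share of random responses and feed a simple mixture-style correction of a mean rank. The function names, the uniform-random-response assumption, and the assumption that attentive respondents always pass the anchor are introduced here purely for illustration; they are not the authors' implementation, which is contained in ranking_error.zip.

# Illustrative sketch only (assumptions noted above), not the replication code.
from math import factorial

import numpy as np


def estimate_random_share(anchor_correct: np.ndarray, n_items: int) -> float:
    """Estimate the proportion of random responses from the anchor pass rate.

    Under the stated assumptions the pass rate decomposes as
    p = (1 - pi) * 1 + pi * (1 / J!), so pi = (1 - p) / (1 - 1 / J!).
    """
    p_correct = anchor_correct.mean()
    guess_rate = 1.0 / factorial(n_items)
    pi = (1.0 - p_correct) / (1.0 - guess_rate)
    return float(np.clip(pi, 0.0, 1.0))


def bias_corrected_mean_rank(observed_ranks: np.ndarray, pi: float, n_items: int) -> float:
    """Back out the attentive-respondent mean rank from a mixture decomposition.

    Observed mean = (1 - pi) * mu + pi * (J + 1) / 2, where (J + 1) / 2 is the
    expected rank under uniform random ranking; solve for mu.
    """
    random_mean = (n_items + 1) / 2.0
    return (observed_ranks.mean() - pi * random_mean) / (1.0 - pi)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    J = 4
    # Simulated data: anchor outcomes (1 = passed) and observed ranks (1..J)
    # for a single target item.
    anchor_correct = rng.binomial(1, 0.72, size=1000)
    observed_ranks = rng.integers(1, J + 1, size=1000)
    pi_hat = estimate_random_share(anchor_correct, J)
    print(f"Estimated share of random responses: {pi_hat:.2f}")
    print(f"Bias-corrected mean rank: {bias_corrected_mean_rank(observed_ranks, pi_hat, J):.2f}")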

Methodology and Processing

Sources Statement

Data Access

Other Study Description Materials

Related Publications

Citation

Title:

Addressing Measurement Errors in Ranking Questions for the Social Sciences (forthcoming, Political Analysis)

Bibliographic Citation:

Kim, Seo-young Silvia, and Yuki Atsusaka. Forthcoming. "Addressing Measurement Errors in Ranking Questions for the Social Sciences." Political Analysis.

Other Study-Related Materials

Label:

ranking_error.zip

Text:

ZIP archive containing the entire replication package

Notes:

application/zip