Replication Data for: Implementation Matters: Evaluating the Proportional Hazard Test’s Performance (doi:10.7910/DVN/D56UWV)

Document Description

Citation

Title:

Replication Data for: Implementation Matters: Evaluating the Proportional Hazard Test’s Performance

Identification Number:

doi:10.7910/DVN/D56UWV

Distributor:

Harvard Dataverse

Date of Distribution:

2023-09-22

Version:

1

Bibliographic Citation:

Metzger, Shawna, 2023, "Replication Data for: Implementation Matters: Evaluating the Proportional Hazard Test’s Performance", https://doi.org/10.7910/DVN/D56UWV, Harvard Dataverse, V1

Study Description

Citation

Title:

Replication Data for: Implementation Matters: Evaluating the Proportional Hazard Test’s Performance

Identification Number:

doi:10.7910/DVN/D56UWV

Authoring Entity:

Metzger, Shawna (University at Buffalo, The State University of New York)

Distributor:

Harvard Dataverse

Access Authority:

Metzger, Shawna

Depositor:

Code Ocean

Holdings Information:

https://doi.org/10.7910/DVN/D56UWV

Study Scope

Keywords:

Social Sciences

Abstract:

Replication material for Metzger's "Implementation Matters" (forthcoming, Political Analysis). See "readme.html" in the /code folder for further documentation. Abstract: Political scientists commonly use Grambsch and Therneau’s (1994, Biometrika) ubiquitous Schoenfeld-based test to diagnose proportional hazard violations in Cox duration models. However, some statistical packages have changed how they implement the test’s calculation. The traditional implementation makes a simplifying assumption about the test’s variance-covariance matrix, while the newer implementation does not. Recent work suggests the test’s performance differs depending on its implementation. I use Monte Carlo simulations to investigate more thoroughly whether the test’s implementation affects its performance. Surprisingly, I find the newer implementation performs very poorly with correlated covariates, with a false positive rate far above 5%. By contrast, the traditional implementation has no such issues in the same situations. This shocking finding raises new, complex questions for researchers moving forward. It appears to suggest that, for now, researchers should favor the traditional implementation in situations where its simplifying assumption is likely met, but researchers must also be mindful that this implementation’s false positive rate can be high in misspecified models.
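
For context, the test the abstract describes is Grambsch and Therneau's Schoenfeld-residual-based test; it is commonly exposed as survival::cox.zph() in R (after coxph()) and as estat phtest in Stata (after stcox). The sketch below is purely illustrative and is not the deposited replication code, which lives in the capsule's /code folder: it runs a Schoenfeld-residual-based proportional hazards test with Python's lifelines package on lifelines' bundled Rossi recidivism data, and it makes no claim about which of the two variance implementations lifelines uses.

    # Illustrative only: not the paper's replication code (that lives in the
    # Code Ocean capsule's /code folder). A Schoenfeld-residual-based PH test
    # run via Python's lifelines package.
    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi
    from lifelines.statistics import proportional_hazard_test

    df = load_rossi()  # example recidivism data shipped with lifelines

    # Fit a Cox proportional hazards model.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="week", event_col="arrest")

    # Test the PH assumption; time_transform selects the time function g(t)
    # used by the test (e.g., "rank", "km", "log").
    results = proportional_hazard_test(cph, df, time_transform="rank")
    results.print_summary()

The per-covariate p-values ask whether each coefficient's scaled Schoenfeld residuals trend with g(t); under proportional hazards, no such trend should appear.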

Methodology and Processing

Sources Statement

Data Access

Other Study Description Materials

Other Study-Related Materials

Label:

capsule-7182194.zip

Notes:

application/zip

Other Study-Related Materials

Label:

result-6357ddba-ed88-4936-b418-9b1f6ea76e4b.zip

Notes:

application/zip