Interdisciplinary Seminar in Quantitative Methods (ISQM)

The goal of the Interdisciplinary Seminar in Quantitative Methods is to provide an interdisciplinary environment where researchers can present and discuss cutting-edge research in quantitative methodology. The talks will be aimed at a broad audience, with more emphasis on conceptual than technical issues. The research presented will be varied, ranging from new methodological developments to applied empirical papers that use methodology in an innovative way. We welcome speakers and audiences from all fields in the social, natural, and behavioral sciences.

Organizers: Matias Cattaneo and Rocio Titiunik
To be added to the email list, please contact us at:

Location: Eldersveld Room, 5670 Haven Hall

Time: Wednesdays, 4:00 - 5:30pm

Note: Please see event listing for particular changes in location or time.

Quantifying Complexity

September 10, 2014: Scott Page, Complex Systems, Political Science, and Economics, University of Michigan

How do Climate Models Compare with Reality Over the Tropics from 1958-2012? HAC-Robust Trend Comparisons Among Climate Series with Possible Intercept Shifts

September 24, 2014: Timothy J. Vogelsang, Economics, Michigan State University

Why Does the American National Election Study Overestimate Voter Turnout?

October 8, 2014: Simon Jackman, Political Science, Stanford University

Estimating the Impacts of Program Benefits: Using Instrumental Variables with Underreported and Imputed Data

October 22, 2014: Mel Stephens, Economics, University of Michigan

Using Experiments to Estimate Geographic Variation in Racially Polarized Voting

November 5, 2014: Kevin M. Quinn, School of Law, University of California at Berkeley

Mitigating the Usual Limitations of the Basic Regression-Discontinuity Design: Theory and Three Empirical Demonstrations from Design Experiments

November 19, 2014: Thomas D. Cook, Sociology, Psychology, and Education and Social Policy, Northwestern University

Statisticians (Social Science) and Data Scientists (Machine Learners): Let’s Talk

December 3, 2014: Neal Beck, Department of Politics, New York University

Essential Ideas of Causal Inference in Experiments and in Observational Studies

February 11, 2015: Don Rubin, Statistics, Harvard University


There are several essential concepts for causal inference in randomized experiments and observational studies. These concepts were formulated only recently, in the twentieth century, and are important to keep in mind when trying to understand the causal effects of past interventions or newly proposed interventions. Some historical connections will be emphasized, and the reasons for the inapposite focus on regression-based methods for causal inference will be discussed.

Measuring Political Knowledge in the Mass Public: Calibrating a Useful Instrument

February 25, 2015: William G. Jacoby, Political Science, Michigan State University


Political knowledge is widely regarded as an important variable in research on public opinion and political behavior. However, there is little consensus among scholars regarding the best way to measure the knowledge possessed by individual citizens. One strategy relies upon interviewer assessments. Despite numerous advantages, this approach has potentially serious problems resulting from systematic differences in judgments across interviewers. Drawing from measurement theory, I propose a simple approach for taking these interviewer biases into account, thereby effectively calibrating the measurement instrument for political knowledge. This approach is tested using the knowledge battery and interviewer assessments from the 2004 ANES. The measurement calibration strategy produces "cleaner" measures of political knowledge that exhibit theoretically reasonable relationships with other variables.

New Developments in Mediation Analysis

March 11, 2015: Tyler J. VanderWeele, Epidemiology and Biostatistics, Harvard School of Public Health


Methodology for assessing direct and indirect effects (i.e., mediation) has been used in the biomedical and social sciences for decades. More recently, theory and methods for mediation have been developed from the causal inference literature to extend traditional methods to more complex settings and to clarify the causal assumptions being made. The talk will (i) provide an overview of concepts, assumptions, and methods for causal mediation analysis, (ii) discuss sensitivity analysis methods to assess robustness of effect estimates to assumptions made, and (iii) present new results and methods on a 4-way decomposition that decomposes a total effect, in the presence of a mediator with which the exposure may interact, into four components: that due to just mediation, that due to just interaction, that due to both mediation and interaction, and that due to neither. The methodology will be illustrated by scientific and policy-relevant examples from medicine, psychology, and genetics.
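For reference, the four components of the decomposition described above can be written compactly as follows (notation follows VanderWeele's published formulation; this is a summary, not material from the talk itself):

```latex
% Four-way decomposition of the total effect (TE) of an exposure on an
% outcome through a mediator, relative to a reference mediator level m*:
% controlled direct effect, reference interaction, mediated interaction,
% and pure indirect effect.
\begin{align*}
\mathrm{TE} \;=\;
  \underbrace{\mathrm{CDE}(m^{*})}_{\text{neither mediation nor interaction}}
  \;+\; \underbrace{\mathrm{INT}_{\mathrm{ref}}}_{\text{interaction only}}
  \;+\; \underbrace{\mathrm{INT}_{\mathrm{med}}}_{\text{mediation and interaction}}
  \;+\; \underbrace{\mathrm{PIE}}_{\text{mediation only}}
\end{align*}
```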

Further book-length discussion of this material can be found in: VanderWeele TJ. (2015). Explanation in Causal Inference: Methods for Mediation and Interaction. Oxford University Press.

Assessing and Enhancing the Generalizability of Randomized Trials to Target Populations

March 25, 2015: Elizabeth A. Stuart, Mental Health and Biostatistics, Johns Hopkins Bloomberg School of Public Health


With increasing attention being paid to the relevance of studies for real-world practice (such as in education, international development, and comparative effectiveness research), there is also growing interest in external validity and in assessing whether the results seen in randomized trials would hold in target populations. While randomized trials yield unbiased estimates of the effects of interventions in the sample of individuals (or physician practices, or hospitals) in the trial, they do not necessarily tell us what the effects would be in some other, potentially somewhat different, population. Although this limitation of traditional trials has been increasingly discussed, relatively little statistical work has been done to develop methods for assessing or enhancing the external validity of randomized trial results. This talk will discuss design and analysis methods for assessing and increasing external validity, as well as general issues that need to be considered when thinking about external validity. The primary analysis approach discussed will be a reweighting approach that equates the sample and target population on a set of observed characteristics. Underlying assumptions, performance in simulations, and limitations will be discussed. Implications for how future studies should be designed to enhance the ability to assess generalizability will also be discussed.
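The reweighting idea described above can be sketched in a few lines. The following is a minimal illustration, not the method from the talk: it uses inverse-odds-of-sampling weights from a logistic model of trial membership, with all variable names and the simulated data being assumptions made for the example.

```python
# Sketch: reweight a non-representative trial sample so it matches a target
# population on an observed covariate, then estimate a weighted effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated target population and a trial that over-samples high-x units.
n_pop, n_trial = 5000, 500
x_pop = rng.normal(0.0, 1.0, size=n_pop)
x_trial = rng.normal(1.0, 1.0, size=n_trial)

# Randomized treatment within the trial; the effect varies with x,
# so the trial-sample effect differs from the population effect.
treat = rng.integers(0, 2, size=n_trial)
y = (1.0 + 0.5 * x_trial + treat * (2.0 - 1.0 * x_trial)
     + rng.normal(0.0, 1.0, size=n_trial))

# Model probability of trial membership given x in the stacked data.
x_all = np.concatenate([x_trial, x_pop]).reshape(-1, 1)
s = np.concatenate([np.ones(n_trial), np.zeros(n_pop)])
p = LogisticRegression().fit(x_all, s).predict_proba(x_all[:n_trial])[:, 1]

# Inverse-odds weights: up-weight trial units that resemble the population.
w = (1.0 - p) / p

def weighted_mean(v, wt):
    return np.sum(v * wt) / np.sum(wt)

naive = y[treat == 1].mean() - y[treat == 0].mean()
weighted = (weighted_mean(y[treat == 1], w[treat == 1])
            - weighted_mean(y[treat == 0], w[treat == 0]))
print(f"naive trial estimate: {naive:.2f}")
print(f"reweighted estimate:  {weighted:.2f}")
```

Because the simulated trial over-represents units with large x (where the effect is smaller), the reweighted estimate moves toward the larger population-average effect; the key assumption, as in the talk's framing, is that the covariates in the sampling model capture all effect modifiers that differ between sample and population.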


April 8, 2015: Judea Pearl, Computer Science Department and Cognitive Systems Lab, UCLA


April 22, 2015: Dylan Small, Statistics, Wharton School, University of Pennsylvania

An empirical model of network formation: detecting homophily when agents are heterogeneous

May 6, 2015: Bryan S. Graham, Economics, University of California at Berkeley