Interdisciplinary Seminar in Quantitative Methods (ISQM)

The goal of the Interdisciplinary Seminar in Quantitative Methods is to provide an interdisciplinary environment where researchers can present and discuss cutting-edge research in quantitative methodology. The talks will be aimed at a broad audience, with more emphasis on conceptual than technical issues. The research presented will be varied, ranging from new methodological developments to applied empirical papers that use methodology in an innovative way. We welcome speakers and audiences from all fields in the social, natural, and behavioral sciences.

Organizers: Matias Cattaneo and Rocio Titiunik
To be added to the email list, please contact us at:

Location: Eldersveld Room, 5670 Haven Hall

Time: Wednesdays, 4:00 - 5:30pm

Note: Please see each event listing for any changes in location or time.

Quantifying Complexity

September 10, 2014: Scott Page, Complex Systems, Political Science, and Economics, University of Michigan

How do Climate Models Compare with Reality Over the Tropics from 1958-2012? HAC-Robust Trend Comparisons Among Climate Series with Possible Intercept Shifts

September 24, 2014: Timothy J. Vogelsang, Economics, Michigan State University
Note time change: Seminar will take place at 1pm

Why Does the American National Election Study Overestimate Voter Turnout?

October 8, 2014: Simon Jackman, Political Science, Stanford University

Estimating the Impacts of Program Benefits: Using Instrumental Variables with Underreported and Imputed Data

October 22, 2014: Mel Stephens, Economics, University of Michigan

Using Experiments to Estimate Geographic Variation in Racially Polarized Voting

November 5, 2014: Kevin M. Quinn, School of Law, University of California at Berkeley
Note location change: Seminar will take place in 201 Lorch Hall

Mitigating the Usual Limitations of the Basic Regression-Discontinuity Design: Theory and Three Empirical Demonstrations from Design Experiments

November 19, 2014: Thomas D. Cook, Sociology, Psychology, and Education and Social Policy, Northwestern University
Note location change: Seminar will take place in 201 Lorch Hall

Statisticians (Social Science) and Data Scientists (Machine Learners): Let’s Talk

December 3, 2014: Neal Beck, Department of Politics, New York University

Essential Ideas of Causal Inference in Experiments and in Observational Studies

February 11, 2015: Don Rubin, Statistics, Harvard University


There are several essential concepts for causal inference in randomized experiments and observational studies. These concepts were formulated only recently, in the twentieth century, and are important to keep in mind when trying to understand the causal effects of past interventions or of newly proposed interventions. Some historical connections will be emphasized, and the reasons for the inapposite focus on regression-based methods for causal inference will be discussed.
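The core idea the abstract refers to, that each unit has potential outcomes under treatment and control but only one is ever observed, can be illustrated with a small simulation. The sketch below is purely hypothetical (not from the talk): it shows that the simple difference in means is unbiased for the average causal effect under randomized assignment, but biased when assignment depends on the potential outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical potential outcomes: each unit has both Y(0) and Y(1),
# but only one is ever observed (the fundamental problem of causal inference).
y0 = rng.normal(0, 1, n)
y1 = y0 + 1.5 + rng.normal(0, 0.5, n)   # true average causal effect = 1.5
true_ate = (y1 - y0).mean()

# Randomized assignment: the observed difference in means is unbiased.
t = rng.binomial(1, 0.5, n)
y_obs = np.where(t == 1, y1, y0)
est_random = y_obs[t == 1].mean() - y_obs[t == 0].mean()

# Confounded assignment: units with high Y(0) are more likely to be treated,
# so the same difference in means now overstates the effect.
p_treat = 1 / (1 + np.exp(-y0))
t_conf = rng.binomial(1, p_treat)
y_conf = np.where(t_conf == 1, y1, y0)
est_conf = y_conf[t_conf == 1].mean() - y_conf[t_conf == 0].mean()
```

Under randomization, `est_random` recovers `true_ate`; under the confounded rule, `est_conf` is substantially too large, which is why design (how units come to be treated) matters before any estimation method is applied.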


February 25, 2015: William G. Jacoby, Political Science, Michigan State University


March 11, 2015: Tyler J. VanderWeele, Epidemiology and Biostatistics, Harvard School of Public Health

Assessing and enhancing the generalizability of randomized trials to target populations

March 25, 2015: Elizabeth A. Stuart, Mental Health and Biostatistics, Johns Hopkins Bloomberg School of Public Health


With increasing attention being paid to the relevance of studies for real-world practice (such as in education, international development, and comparative effectiveness research), there is also growing interest in external validity and in assessing whether the results seen in randomized trials would hold in target populations. While randomized trials yield unbiased estimates of the effects of interventions in the sample of individuals (or physician practices or hospitals) in the trial, they do not necessarily tell us what the effects would be in some other, potentially somewhat different, population. While this limitation of traditional trials has been increasingly discussed, relatively little statistical work has been done developing methods to assess or enhance the external validity of randomized trial results. This talk will discuss design and analysis methods for assessing and increasing external validity, as well as general issues that need to be considered when thinking about external validity. The primary analysis approach discussed will be a reweighting approach that equates the sample and target population on a set of observed characteristics. Underlying assumptions, performance in simulations, and limitations will be discussed, along with implications for how future studies should be designed so as to enhance the ability to assess generalizability.
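The reweighting idea described above, equating the trial sample and the target population on observed characteristics, can be sketched in a few lines. This is a hypothetical minimal example (not the speaker's implementation), with a single binary covariate that both moderates the treatment effect and is over-represented in the trial; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a binary covariate X moderates the treatment effect.
# Target population: 70% have X=1. Trial sample over-represents X=1 (90%).
n_trial = 5_000
p_pop, p_trial = 0.7, 0.9

x = rng.binomial(1, p_trial, n_trial)   # covariate among trial participants
t = rng.binomial(1, 0.5, n_trial)       # randomized treatment assignment
# Effect is 2 when X=1 and 0 when X=0, so the population ATE is 0.7 * 2 = 1.4.
y = 1.0 + t * (2.0 * x) + rng.normal(0, 1, n_trial)

def diff_in_means(y, t, w=None):
    """(Weighted) difference in mean outcomes, treated minus control."""
    w = np.ones_like(y) if w is None else w
    return (np.average(y[t == 1], weights=w[t == 1])
            - np.average(y[t == 0], weights=w[t == 0]))

# Unweighted trial estimate: unbiased for the trial sample (~0.9 * 2 = 1.8),
# but not for the target population.
naive = diff_in_means(y, t)

# Reweight each trial unit by P_pop(X = x) / P_trial(X = x), so the weighted
# covariate distribution matches the target population.
w = np.where(x == 1, p_pop / p_trial, (1 - p_pop) / (1 - p_trial))
reweighted = diff_in_means(y, t, w)     # close to the population ATE of 1.4
```

In practice the selection probabilities are not known and are estimated from observed characteristics (e.g. via a model of trial participation), which is where the underlying assumptions discussed in the talk, such as no unmeasured moderators of selection, come in.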


April 8, 2015: Judea Pearl, Computer Science Department and Cognitive Systems Lab, UCLA


April 22, 2015: Dylan Small, Statistics, Wharton School, University of Pennsylvania

An empirical model of network formation: detecting homophily when agents are heterogeneous

May 6, 2015: Bryan S. Graham, Economics, University of California at Berkeley