Oslo Summer School in Comparative Social Science Studies 2010

Research Design

Lecturer: Professor Barbara Geddes,
Department of Political Science, University of California, Los Angeles

Main discipline: Political Science

Dates: 21 - 25 July 2010
Course Credits: 10 pts (ECTS)
Limitation: 30 participants

This course is designed to help students design good, theoretically informed empirical research projects and write effective funding proposals. It addresses issues relevant to both qualitative and quantitative research.

The purpose of empirical social science research is to build theories that help us to understand the world. Good research is both theoretically interesting and persuasive. Persuasiveness depends on whether the evidence shown in the study convinces readers that the author’s arguments and interpretations are correct. This course has two goals: to help students choose theoretically interesting and researchable dissertation and paper topics; and to increase students' general sophistication in designing research strategies that will make their research findings persuasive.

The first session of the class will be spent making sure that everyone has at least the beginning of a research proposal idea. In later sessions, we will discuss transforming vague topics and inchoate ideas into clear arguments from which testable hypotheses can be drawn; linking current events and other specific outcomes we may want to explain to appropriate theoretical ideas; and non-quantitative methodological issues that determine whether one's research is ultimately persuasive.

This course encompasses both qualitative and quantitative research strategies, and many comparative dissertations now use both. Topics of special relevance to qualitative research include: how to make the most of small-N case selection; how to test path dependent arguments; how to test arguments about necessary causes; and how to use case studies as sources of evidence with which to test arguments. Topics of special interest to quantitative researchers include: learning to be creative about testing the implications of arguments instead of producing kitchen sink regressions; the careful operationalization of important qualitative concepts; and creating data from qualitative sources.

The (reasonably short) readings on these subjects will be discussed in the context of the research ideas proposed by members of the class. Students are expected to do the assigned reading, some of which is boring, and to think and talk about how the issues raised in it might be relevant to their own research projects.

Some short assignments will be done in class. The final paper for the class will be a dissertation prospectus or research funding proposal. Students should consult with me individually about their topics. They will be given a template and instructions for how to write the proposal. Each student should finish the class with a usable proposal and a reasonable idea of what to do next in the research process.

The course is designed primarily to meet the needs of students who are beginning to think seriously about research, though more advanced students are welcome.



Lecture 1: Intellectual introductions; Choosing a Research Topic

During this session, students will be asked to describe their proposed research topics. We will discuss what makes a “good” topic and the roles of both passion and methodical work in good research.


Articles marked with * are found in the course compendium. Other articles are available online via JSTOR.

Lecture 2: Explaining Outcomes vs. Testing Arguments

Interest in a research topic usually begins with wanting to explain some particular outcome. The naïve approach to explaining outcomes is to list all possible contributors to the outcome and then, if we are quantitatively inclined, to throw them into a kitchen sink regression. This may be an appropriate strategy if we aim to predict the outcome of a fairly well understood process, but it is not the best strategy for building an understanding of a process we do not already understand. To do that, we need to focus on the moving parts of the mechanisms that lead to the outcome, theorize how they work one by one, and then devise observable implications of these tentative theories that can be tested.

Short in-class assignment.



Lecture 3: Small-N Issues

In this session, we discuss the basics of the small-N comparative case-study method. We also consider its limitations.

Assignments returned and discussed.


Lecture 4: Selection Bias and Case Selection

The lecture begins with a simple demonstration of why cases should not be selected on the basis of a particular outcome. We discuss how to avoid selection bias in both small- and large-N research. We also discuss selection by “nature” and how to devise research strategies to compensate for it.



Lecture 5: Rival Hypotheses and Crucial Tests

Here we discuss the relationship between “the literature” and your own argument. In order to do persuasive research, you must test your own arguments against rival arguments drawn from prior research. A good research design includes “crucial tests” that demonstrate both that your argument is consistent with evidence and that rival arguments are not. In quantitative research, it is usually possible to include operationalizations of rival arguments along with your own in the same statistical model. In qualitative research, however, we must use thoughtful case selection and, often, multiple different tests to accomplish the same thing.

Short in-class assignment.



Lecture 6: The Logic of Quasi-Experimental Research Design

This lecture shows the simple structure of several common research designs, making clear the strengths and limitations of each. Its purpose is to familiarize students with different research design options and to help them choose ones that are appropriate and feasible for their own topics.



Lecture 7: Comparative Historical Research and Path Dependence

This lecture begins with a careful definition of path dependence. We then discuss the various causal processes that can lead to path dependence. Finally, we consider ways of testing arguments about path dependent causal processes.


Lecture 8: Operationalizing and "Measuring" Causal Factors

Both qualitative and quantitative research require “measurement,” but they tend to face different kinds of measurement problems. In quantitative research, the most serious problem is often finding or devising operationalizations of concepts that really capture their meaning. In qualitative research, one of the most serious problems is figuring out concrete criteria for assigning cases to non-quantitative categories such as democratic or authoritarian. This lecture discusses strategies for dealing with the operationalization of abstract concepts and non-quantitative “measurement.”

Short in-class assignment.



Lecture 9: Testing Arguments That Posit Necessary Conditions

Testing arguments about necessary conditions requires different case selection criteria than does testing probabilistic arguments. In this lecture we discuss rigorous methods for testing arguments about necessary causes.

Instructions for the final paper will be passed out and discussed.



Lecture 10: Deciding What Approach Fits Your Topic: Rational Choice and Its Critics

Any good argument or theory needs to identify the actors that cause the action under study and describe why they act as they do. For topics in which the assumptions about human decision making that underlie the rational choice approach are not too implausible, rational choice offers a well understood template for thinking through the logic of the particular argument. For some other topics (e.g., the individual formation of attitudes and values), previous research has created other standard explanations for why actors act as they do. For still others, the appropriateness of different approaches is contested. Approaches are not religions, to be embraced for life. Instead, the researcher should choose an approach that is appropriate to a particular topic, which depends on what assumptions about the relevant behavior seem plausible and on which aspects of a causal process the researcher wishes to focus.



Complete List of Readings for Research Design



The Lecturer
Barbara Geddes, who earned her Ph.D. from UC, Berkeley in 1986, has written about politics and breakdown in authoritarian regimes, bureaucratic reform and corruption, political bargaining over institutional choice and change, and research design. Her publications include Paradigms and Sand Castles: Theory Building and Research Design in Comparative Politics (2003), Politician’s Dilemma: Building State Capacity in Latin America (1994), “What Causes Democratization?” in The Oxford Handbook of Comparative Politics (2007), and a number of other articles. Her current research focuses on the effect of authoritarian interludes on the democratic party systems that emerge after transitions. She teaches Latin American politics, authoritarian politics, and research design at UCLA.
