Oslo Summer School in Comparative Social Science Studies 2008
Lecturer: Professor Barbara Geddes
Department of Political Science, University of California, Los Angeles
Main disciplines: Political Science
Dates: 21 - 25 July 2008
Course Credits: 10 pts (ECTS)
Limitation: 30 participants
This course is designed to help students design good, theoretically informed empirical research projects and write effective funding proposals. It addresses issues relevant to both qualitative and quantitative research.
The purpose of empirical social science research is to build theories that help us to understand the world. Good research is both theoretically interesting and persuasive. Persuasiveness depends on whether the evidence shown in the study convinces readers that the author’s arguments and interpretations are correct. This course has two goals: to help students choose theoretically interesting and researchable dissertation and paper topics; and to increase students' general sophistication in designing research strategies that will make their research findings persuasive.
The first session of the class will be spent making sure that everyone has at least the beginning of a research proposal idea. In later sessions, we will discuss transforming vague topics and inchoate ideas into clear arguments from which testable hypotheses can be drawn; linking current events and other specific outcomes we may want to explain to appropriate theoretical ideas; and non-quantitative methodological issues that determine whether one's research is ultimately persuasive.
This course encompasses both qualitative and quantitative research strategies, and many comparative dissertations now use both. Topics of special relevance to qualitative research include: how to make the most of small-N case selection; how to test path dependent arguments; how to test arguments about necessary causes; and how to use case studies as sources of evidence with which to test arguments. Topics of special interest to quantitative researchers include: learning to be creative about testing the implications of arguments instead of producing kitchen sink regressions; the careful operationalization of important qualitative concepts; and creating data from qualitative sources.
The (reasonably short) readings on these subjects will be discussed in the context of the research ideas proposed by members of the class. Students are expected to do the assigned reading, some of which is boring, and to think and talk about how the issues raised in it might be relevant to their own research projects.
Some short assignments will be done in class. The final paper for the class will be a dissertation prospectus or research funding proposal. Students should consult with me individually about their topics. They will be given a template and instructions for how to write the proposal. Each student should finish the class with a usable proposal and a reasonable idea of what to do next in the research process.
The course is designed primarily to meet the needs of students who are beginning to think seriously about research, though more advanced students are welcome.
Lecture 1: Intellectual introductions; Choosing a Research Topic
During this session, students will be asked to describe their proposed research topics. We will discuss what makes a “good” topic and the roles of both passion and methodical work in good research.
Articles marked with * are found in the course compendium. Other articles are available on the internet via JSTOR.
- King, Gary, Robert Keohane, and Sidney Verba (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research, Princeton: Princeton University Press, pp. 1-23, 29-33, 46-50, and 100-114;
- Geddes, Barbara (2003), Paradigms and Sand Castles: Theory Building and Research Design in Comparative Politics, Ann Arbor: University of Michigan Press, pp. 27-37
Lecture 2: Explaining Outcomes vs. Testing Arguments
Interest in a research topic usually begins with wanting to explain some particular outcome. The naïve approach to explaining an outcome is to list all possible contributors to it and then, if we are quantitatively inclined, to throw them into a kitchen-sink regression. This may be an appropriate strategy if we aim to predict the outcome of a fairly well-understood process, but it is not the best strategy for building an understanding of a process we do not yet understand. To do that, we need to focus on the moving parts of the mechanisms that lead to the outcome, theorize how each part works, and then devise observable implications of these tentative theories that can be tested.
Short in-class assignment.
- Geddes, Paradigms and Sand Castles, chap. 1, pp. 1-26, and the rest of chap. 2, pp. 37-88
Lecture 3: Small N Issues
In this session, we discuss the basics of the small-N comparative case-study method. We also consider its limitations.
Assignments returned and discussed.
- *Lijphart, Arend (1975), "The Comparable-Cases Strategy in Comparative Research," Comparative Political Studies 8 (July), pp. 158-77
- Lieberson, Stanley (1991), "Small N's and Big Conclusions: An Examination of the Reasoning in Comparative Studies Based on a Small Number of Cases,” Social Forces 70 (December), pp. 307-20
- King, Keohane, and Verba, pp. 208-228
Lecture 4: Selection Bias and Case Selection
The lecture begins with a simple demonstration of why cases should not be selected on the basis of a particular outcome. We discuss how to avoid selection bias in both small- and large-N research. We also discuss selection by “nature” and how to devise research strategies to compensate for it.
- Achen, Christopher and Duncan Snidal (1989), "Rational Deterrence Theory and Comparative Case Studies," World Politics 41 (January), pp. 143-69
- Geddes, Paradigms and Sand Castles, pp. 89-114
- King, Keohane and Verba, pp. 128-48
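The attenuation the lecture demonstrates can be seen in a small simulation (a hypothetical sketch with invented data, not part of the course materials): if we estimate the effect of a cause only among cases where the outcome occurred, the estimate shrinks toward zero.

```python
import random

# Hypothetical sketch (invented data): selecting cases on the
# dependent variable biases the estimated effect toward zero.
random.seed(0)

def slope(pairs):
    """Ordinary least-squares slope of y on x."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x, _ in pairs)
    return cov / var

# True data-generating process: y = 2*x + noise.
data = []
for _ in range(10_000):
    x = random.gauss(0, 1)
    data.append((x, 2 * x + random.gauss(0, 1)))

full_sample = slope(data)                             # near the true 2.0
selected = slope([(x, y) for x, y in data if y > 1])  # "successes" only

print(f"full sample:         {full_sample:.2f}")
print(f"selected on outcome: {selected:.2f}")
```

The selected-on-outcome estimate comes out well below the full-sample one, even though both samples were generated by the same causal process.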
Lecture 5: Rival Hypotheses and Crucial Tests
Here we discuss the relationship between “the literature” and your own argument. In order to do persuasive research, you must test your own arguments against rival arguments drawn from prior research. A good research design includes “crucial tests” that demonstrate both that your argument is consistent with evidence and that rival arguments are not. In quantitative research, it is usually possible to include operationalizations of rival arguments along with your own in the same statistical model. In qualitative research, however, we must use thoughtful case selection and, often, multiple different tests to accomplish the same thing.
Short in-class assignment.
- Chamberlin, T. C. (1965), "The Method of Multiple Working Hypotheses," Science 148, pp. 754-59
- *Platt, John (1964), "Strong Inference," Science 146, pp. 347-53
- *Van Evera, Stephen (1997), Guide to Methods for Students of Political Science, Ithaca: Cornell University Press, pp. 7-48
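For the quantitative case described above, rival arguments can be entered as competing regressors in one model. A minimal sketch with invented data (the variable names and effect sizes are assumptions for illustration only):

```python
import numpy as np

# Hypothetical sketch: pit your argument (x1) against a rival (x2)
# by including operationalizations of both in the same model.
rng = np.random.default_rng(0)
n = 1_000

x1 = rng.normal(size=n)             # your explanatory variable
x2 = 0.5 * x1 + rng.normal(size=n)  # rival's variable, correlated with x1
y = 1.5 * x1 + rng.normal(size=n)   # by construction, only x1 matters

# OLS with both operationalizations included.
X = np.column_stack([np.ones(n), x1, x2])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"x1 estimate: {coefs[1]:.2f}, x2 estimate: {coefs[2]:.2f}")
# Because x1 and x2 are correlated, a regression on x2 alone would
# wrongly credit the rival; including both lets the data adjudicate.
```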
Lecture 6: The Logic of Quasi-Experimental Research Design
This lecture shows the simple structure of several common research designs, making clear the strengths and limitations of each. Its purpose is to familiarize students with different research design options and to help them choose ones that are appropriate and feasible for their own topics.
- *Campbell, Donald and Julian Stanley (1966), Experimental and Quasi-Experimental Designs for Research, Chicago: Rand McNally, pp. 1-22, 34-43, 47-50, and 55-60
- Geddes, Paradigms and Sand Castles, pp. 117-129
Lecture 7: Comparative Historical Research and Path Dependence
This lecture begins with a careful definition of path dependence. We then discuss the various causal processes that can lead to path dependence. Finally, we consider ways of testing arguments about path dependent causal processes.
- *Lieberman, Evan (2001), “Causal Inference in Historical Institutional Analysis: A Specification of Periodization Strategies,” Comparative Political Studies 34, pp. 1011-35
- *Alexander, Gerard (2001), “Institutions, Path Dependence, and Democratic Consolidation,” Journal of Theoretical Politics 13, pp. 249-70
- Geddes, Paradigms and Sand Castles, pp. 131-42
Lecture 8: Operationalizing and "Measuring" Causal Factors
Both qualitative and quantitative research require “measurement,” but they tend to face different kinds of measurement problems. In quantitative research, the most serious problem is often finding or devising operationalizations of concepts that really capture their meaning. In qualitative research, one of the most serious problems is figuring out concrete criteria for assigning cases to non-quantitative categories such as democratic or authoritarian. This lecture discusses strategies for dealing with the operationalization of abstract concepts and non-quantitative “measurement.”
Short in class assignment.
- *Przeworski, Adam and Henry Teune (1982), The Logic of Comparative Social Inquiry, Malabar, FL: RE Krieger, pp. 91-112
- Elkins, Zachary (2000), “Gradations of Democracy? Empirical Tests of Alternative Conceptualizations,” American Journal of Political Science 44, pp. 287-94
- Geddes, Paradigms and Sand Castles, pp. 142-73
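One way to make qualitative "measurement" transparent is to state the coding rule explicitly before assigning cases to categories. A hypothetical sketch (the criteria below are invented for illustration, not drawn from the readings):

```python
# Hypothetical coding rule: assign a regime to the "democratic"
# category only if it meets every listed criterion, so the
# assignment is reproducible rather than impressionistic.

CRITERIA = ("competitive_elections", "universal_suffrage", "executive_elected")

def code_regime(case: dict) -> str:
    """Apply the coding rule and return the assigned category."""
    if all(case.get(c, False) for c in CRITERIA):
        return "democratic"
    return "authoritarian"

case = {"competitive_elections": True,
        "universal_suffrage": True,
        "executive_elected": False}
print(code_regime(case))  # authoritarian
```

Writing the rule down forces the researcher to defend each criterion, and lets readers see exactly why a borderline case landed where it did.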
Lecture 9: Testing Arguments That Posit Necessary Conditions
Testing arguments about necessary conditions requires different case selection criteria than does testing probabilistic arguments. In this lecture we discuss rigorous methods for testing arguments about necessary causes.
Instructions for the final paper will be passed out and discussed.
- Dion, Douglas (1998), “Evidence and Inference in the Comparative Case Study,” Comparative Politics 30, pp. 127-45.
- Braumoeller, Bear and Gary Goertz (2000), “The Methodology of Necessary Conditions,” American Journal of Political Science 44, pp. 844-58
- Geddes, Paradigms and Sand Castles, pp. 114-117
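The core logic can be stated in a few lines of code (a hypothetical sketch; the case names and variables are invented for illustration): a claim that a condition is necessary for an outcome is falsified only by cases showing the outcome without the condition, so cases where the outcome occurred are the ones that carry evidential weight.

```python
# Hypothetical sketch of the logic of necessary-condition tests:
# "C is necessary for O" is falsified only by cases with O present
# and C absent.

def counterexamples(cases, condition, outcome):
    """Return cases that falsify 'condition is necessary for outcome'."""
    return [name for name, traits in cases.items()
            if traits[outcome] and not traits[condition]]

# Invented data: did mass protest (outcome) occur with or without
# an economic crisis (candidate necessary cause)?
cases = {
    "Case A": {"crisis": True,  "protest": True},
    "Case B": {"crisis": True,  "protest": False},
    "Case C": {"crisis": False, "protest": False},
    "Case D": {"crisis": False, "protest": True},   # falsifying case
}

print(counterexamples(cases, "crisis", "protest"))  # ['Case D']
```

Note that Case B, where the condition was present but the outcome did not occur, is consistent with the necessity claim; selecting cases on the outcome, which is a mistake for probabilistic arguments, is exactly right here.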
Lecture 10: Deciding What Approach Fits Your Topic: Rational Choice and Its Critics
Any good argument or theory needs to identify the actors that cause the action under study and describe why they act as they do. For topics in which the assumptions about human decision making that underlie the rational choice approach are not too implausible, rational choice offers a well-understood template for thinking through the logic of the particular argument. For some other topics (e.g., the individual formation of attitudes and values), previous research has created other standard explanations for why actors act as they do. In still others, the appropriateness of different approaches is contested. Approaches are not religions, to be embraced for life. Instead, the researcher should choose an approach appropriate to the particular topic, a choice that depends on what assumptions about the relevant behavior seem plausible and on which aspects of a causal process the researcher wishes to focus.
- *Green, Donald and Ian Shapiro (1994), Pathologies of Rational Choice: A Critique of Applications in Political Science, New Haven: Yale University Press, pp. 13-46
- *Cox, Gary (1999), “The Empirical Content of Rational Choice Theory: A Reply to Green and Shapiro,” Journal of Theoretical Politics 11 (April) pp. 147-69
- Geddes, Paradigms and Sand Castles, chap. 5, pp. 175-211
Complete List of Readings for Research Design
- King, Gary, Robert Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.
- Geddes, Barbara. 2003. Paradigms and Sand Castles: Theory Building and Research Design in Comparative Politics. Ann Arbor: University of Michigan Press.
- Lijphart, Arend. 1975. “The Comparable-Cases Strategy in Comparative Research.” Comparative Political Studies 8 (July): 158-77.
- Lieberson, Stanley. 1991. “Small N’s and Big Conclusions: An Examination of the Reasoning in Comparative Studies Based on a Small Number of Cases.” Social Forces 70 (December): 307-20.
- Achen, Christopher and Duncan Snidal. 1989. “Rational Deterrence Theory and Comparative Case Studies.” World Politics 41 (January): 143-69.
- Van Evera, Stephen. 1997. Guide to Methods for Students of Political Science. Ithaca: Cornell University Press, pp. 7-48
- Chamberlin, T. C. 1965. “The Method of Multiple Working Hypotheses.” Science 148: 754-59.
- Platt, John. 1964. “Strong Inference,” Science 146: 347-53
- Campbell, Donald and Julian Stanley. 1966. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally, pp. 1-22, 34-43, 47-50, and 55-60
- Lieberman, Evan. 2001. “Causal Inference in Historical Institutional Analysis: A Specification of Periodization Strategies.” Comparative Political Studies 34: 1011-35.
- Alexander, Gerard. 2001. “Institutions, Path Dependence, and Democratic Consolidation,” Journal of Theoretical Politics 13: 249-70.
- Przeworski, Adam and Henry Teune. 1982. The Logic of Comparative Social Inquiry, Malabar, FL: RE Krieger, pp. 91-112
- Elkins, Zachary. 2000. “Gradations of Democracy? Empirical Tests of Alternative Conceptualizations.” American Journal of Political Science 44: 287-94
- Dion, Douglas. 1998. “Evidence and Inference in the Comparative Case Study,” Comparative Politics 30: 127-45.
- Braumoeller, Bear and Gary Goertz. 2000. “The Methodology of Necessary Conditions.” American Journal of Political Science 44: 844-58.
- Green, Donald and Ian Shapiro. 1994. Pathologies of Rational Choice: A Critique of Applications in Political Science. New Haven: Yale University Press, pp. 13-46
- Cox, Gary. 1999. “The Empirical Content of Rational Choice Theory: A Reply to Green and Shapiro.” Journal of Theoretical Politics 11 (April): 147-69.
Barbara Geddes, who earned her Ph.D. from the University of California, Berkeley in 1986, has written about politics and breakdown in authoritarian regimes, bureaucratic reform and corruption, political bargaining over institutional choice and change, and research design. Her publications include Paradigms and Sand Castles: Theory Building and Research Design in Comparative Politics (2003), Politician’s Dilemma: Building State Capacity in Latin America (1994), “What Causes Democratization?” in The Oxford Handbook of Comparative Politics (2007), and a number of other articles. Her current research focuses on the effect of authoritarian interludes on the democratic party systems that emerge after transitions. She teaches Latin American politics, authoritarian politics, and research design at UCLA.