
Understanding Sample Surveys: Selective Learning about Social Science Research Methods

Published online by Cambridge University Press:  30 June 2010

Mary Currin-Percival, University of Minnesota–Duluth
Martin Johnson, University of California–Riverside

Abstract

We investigate differences in what students learn about survey methodology in a class on public opinion presented in two critically different ways: with the inclusion or exclusion of an original research project using a random-digit-dial telephone survey. Using a quasi-experimental design and data obtained from pretests and posttests in two public opinion courses, we test the hypothesis that students who participate in an original survey research project will have a stronger understanding of survey research methods than students who do not. To better assess the effect of the active learning element of the course, we estimate average treatment effects on the students who participated in the original survey project using nearest neighbor matching (Abadie et al. 2004) with student scores on a pretest. We find evidence of modest improvement in learning of survey methods in the course featuring the original survey research project; however, our major finding is that a course featuring this kind of opportunity appeals to a different kind of student than a course that keeps participants closer to the classroom and library than to the social science research laboratory. This discovery may have important implications for our understanding of the effects of active learning opportunities in other types of elective courses.

Type: The Teacher
Copyright © American Political Science Association 2010

Political science educators often advocate service learning, experiential learning, and other forms of what is called active learning. While active learning in the discipline often focuses on political activities like internships (Pecorella 2007), many articles in this journal have promoted a more engaged form of training in research methods that consists of designing a course around the conduct of an original survey research project (e.g., Cole 2003; Hauss 2001; McBride 1994). A number of these articles assert the pedagogical value of hands-on research training, but we have not seen a direct investigation of this claim. In this study, we investigated differences in what students learn about survey methodology in a class on public opinion presented in two critically different ways: with the inclusion or exclusion of an original research project, a random-digit-dial (RDD) telephone survey. The courses were conducted by the same instructor at the University of California, Riverside, in 2005 and 2008.

One of the critical issues we confronted with this assessment is that students have a great deal of freedom in their choice of courses. Consequently, we faced a problem of our research participants selecting themselves into and out of our quasi-experimental treatments (e.g., Barabas 2004; Rubin 2006). Students at this public university have a wide variety of options for classes in political science and other social science disciplines. Even among political science majors, students could have easily avoided the 2005 course offering and the effort entailed in fielding an RDD survey, such as placing telephone calls, collecting data, and analyzing those data for a term paper. Similarly, students interested in more hands-on research opportunities could easily have avoided the 2008 administration of the course, which was much more rooted in the discussion of research articles on public opinion.

We tested the expectation that research opportunities would improve learning outcomes in the domain of understanding survey research methods. This article is organized into five sections. First, we briefly review research on and advocacy of experiential education for undergraduate research methods. This prior literature informed our expectation that students would learn more about research methods in a course designed to include an original research project. Then, we describe the research design and the courses that informed the present investigation, focusing on a common entrance and exit assessment of student knowledge of survey research methods. In the third section of the paper, we present a preliminary analysis of class scores on these exams. Importantly, these results reveal the potential contamination of our findings with bias associated with student self-selection into these two types of classes. In the following section, we estimate average treatment effects using nearest neighbor matching (Abadie et al. 2004) in an effort to minimize the effects of this selection bias. Finally, we discuss our findings and their implications for understanding best practices in teaching research methods, as well as for scholarly research on learning outcomes.

EXPERIENTIAL LEARNING AND RESEARCH METHODS TRAINING

Active learning is different from traditional learning in that "rather than the teacher presenting facts to the students, the students play an active role in the learning by exploring issues and ideas under the guidance of the instructor" (Hamlin and Janssen 1987, 45). Active learning techniques can include class participation projects, such as activities involving hands-on experience or "demonstrations in which the students participate directly" (Kvam 2000, 136). Moreover, active learning techniques rely less on memorization of large quantities of information and instead encourage students to think about course material in new ways (Hamlin and Janssen 1987; McCarthy and Anderson 2000). Specific forms of active learning, such as service learning, community-based learning, and experiential learning, have been examined extensively in the biological and physical sciences (e.g., Miller and Groccia 1997), statistics (e.g., Kvam 2000), business (see Gosen and Washbush 2004 for a review), sociology (see Mooney and Edwards 2001 for a review), and education (e.g., Kolb 1984; Kolb and Kolb 2005). Research on active learning in political science has become more prevalent; however, it is not as extensive as the research found in these other disciplines.

Experiential Learning and Social Science Research

Although a number of studies assume that students learn more from courses containing an active learning component, little research compares the effectiveness of active learning to that of traditional or more "passive learning" techniques with respect to student performance (see DeNeve and Heppner 1997; for a review, see McCarthy and Anderson 2000). Hamlin and Janssen (1987) did not directly compare the two types of teaching methods, but they examined exam scores and "a professionally developed essay-style student evaluation form" for introductory sociology courses taught over two quarters with an active learning method and over two quarters with a traditional lecture-and-exam method. They found that students did slightly worse on the "concept-definition type of test" under the active learning method; however, based on the subjective evaluation forms, students in the active learning courses acquired skills that reflected a more sociological way of thinking (1987, 51).

In more direct comparisons of active learning to traditional learning techniques, the effectiveness of active learning techniques with respect to student performance has proved "ambiguous" (DeNeve and Heppner 1997; Miller and Groccia 1997). However, one recent study (McCarthy and Anderson 2000) comparing active and traditional teaching techniques in political science and history courses showed evidence to support the claim that active learning techniques improve student performance. In the political science course, the authors conducted an experiment in which two classes learned about question wording in public opinion polls; one class learned this concept as part of an in-class collaborative exercise, the other as part of a traditional classroom lecture. A week later, students in both classes were tested on their knowledge of the nuances of writing good poll questions (McCarthy and Anderson 2000, 288). Students in the collaborative, active learning course performed better on this test than did the students who received the lecture-based instruction. For the history component, the authors' experiment involved the use of a role-playing exercise to teach students about North American history and multiculturalism. Two sections of a large history class were instructed using traditional teaching methods, and three sections were instructed using the role-playing exercise (the same course material was presented in all sections). The students in the role-playing sections performed better than the students in the traditional teaching sections on an essay exam about multiple cultures in the New World (McCarthy and Anderson 2000, 289). While their results suggest that active learning improves student performance, McCarthy and Anderson are cautious in extrapolating their conclusions, because the experimental design was not "truly 'scientific,'" in that it did not involve the use of random samples (2000, 289).

Experiential Learning and Social Science Methodology

Research on active learning in political science has examined topics as varied as how the use of simulations affects learning in a comparative politics course (Shellman 2001), how experiential learning in the form of a public service fellowship program enhances graduate education in a doctoral program (Marando and Melchior 1997), and whether a collaborative exercise helps students learn more about the importance of question wording in public opinion polls (McCarthy and Anderson 2000).

Several studies have examined the effects of active learning in political science courses that incorporate an original survey project. When investigating the effects of employing a survey project, political science instructors often focus on other potential educational outcomes, such as improving students' community engagement and interest in politics, rather than on how much the experience improves knowledge or understanding of research methods. Jones and Meinhold (1999) found that conducting a survey does little to improve civic engagement. However, students involved in a multisite exit poll project experienced increased interest in studying political science and electoral politics (Cole 2003).

Inattention to improvements in student knowledge of political science research methods may result from the strong shared assumption that hands-on learning improves student understanding of research design, data collection, and analysis. For example, McBride notes, "[I]t is my belief that students gain considerably from this hands-on approach to research. While grades may not be higher, the experience of designing research, composing a questionnaire, collecting and eventually analyzing data, cannot but help students to increase their understanding of the social scientific process" (1994, 557). Similarly, Jones and Meinhold recognize potential shortcomings associated with conducting original survey projects in their classes, but note that "teachers who use experiential learning in their instruction rarely doubt its efficacy and often recommend its use" (1999, 603). Additionally, the value of conducting a survey is implicit in Hauss' (2001) discussion of a national survey conducted by George Mason University undergraduates.

Cole (2003) assesses how involvement in a research project affects student perceptions of their methods education. She asked students to evaluate their own learning in a class that fielded an exit survey and found that students felt they understood both the substance of the course they were enrolled in and survey research methods better as a result of the project. However, this research design did not address the counterfactual: Would the students have experienced similar gains in knowledge without survey fieldwork? That is, did the survey itself effect a greater methodological understanding among these students? This is the question we address with a quasi-experimental design intended to detect the pedagogical benefits of conducting an original survey research project in a course focused on mass media and public opinion.

Expectations

Our principal hypothesis was that students who participated in an original survey research project would have a stronger understanding of survey research methods than students who did not, other things being equal. However, the null expectation that hands-on experience matters little is a strong alternative for a variety of reasons. It could certainly be the case that any methods training improves students' knowledge of research methods. Similarly, students may learn little about methods in both experiential and literature-based teaching environments. Our expectation that direct, experiential learning would have pedagogical benefits for students was perhaps hopeful. We tested this expectation using a pretest/posttest quasi-experiment fielded over the span of two similar classes.

METHODS

The Class: Mass Media and Public Opinion

In the spring semesters of 2005 and 2008, the same instructor offered substantively similar courses under the catalogue title “Mass Media and Public Opinion.” The spring 2005 administration of the course was supported by an instructional teaching development grant that funded fieldwork for an original survey research project using the University of California, Riverside, computer-assisted telephone interviewing (CATI) facility. The grant funded the purchase of a random-digit-dial (RDD) sample and telephone toll charges, and supported a graduate assistant to supervise fieldwork. The primary responsibility of the graduate assistant was to supervise call center shifts and facilitate the survey project. Undergraduate students were informed at the beginning of the course that they would work on this project as part of the class. These students—82 completed the course—collaborated with the instructor and graduate assistant on all elements of the project: developing research questions, selecting a sampling frame, writing the questionnaire, preparing the project proposal for the university's institutional review board, fielding the survey, and analyzing the data collected. The final paper for the course required students to write up a rudimentary analysis of the data.

In spring 2008, the instructor offered the same course to 99 students. This version of the class did not include an original survey research project. Instead, the course culminated in each student writing a literature review investigating social science research on an empirical question of his or her choosing. Lectures for the 2008 course were largely informed by lectures from the 2005 course, and both courses featured a similar set of readings. However, the courses differed somewhat beyond the survey research component. For example, the 2008 course had the support of a reader/grader rather than a more involved graduate assistant, because there was no survey project to supervise. The graduate assistant was necessary for the 2005 survey project; without one, the professor would have had to take on additional responsibilities, such as supervising the CATI call center. Moreover, the 2005 course required more extensive methodological readings, including an excellent and reasonably accessible text on survey research (Weisberg, Krosnick, and Bowen 1996). Beyond these differences, the courses were similar in terms of organization and student expectations. Although we acknowledge that these other differences require a judicious interpretation of our findings, we stress that the primary difference between the two courses was the inclusion or exclusion of the original research project. Student characteristics are displayed in Table 1.

Table 1 Aggregate Descriptive Characteristics of Students

Entrance and Exit Assessments of Survey Methods Knowledge

To assess the efficacy of the active learning component, students in both classes completed a brief entrance exam, with the same test repeated as an exit exam at the conclusion of the course. The exam asked students a variety of questions about scientific sampling and survey questionnaire construction to test their knowledge and understanding of concepts. Some questions tested factual knowledge (e.g., identifying a sample, the possibility of representativeness), while others were designed to ascertain students' deeper understanding of the material or their ability to apply what they learned in the course (e.g., explaining the idea of a random sample, representativeness, margin of error, question context effects, and double-barreled questions). The questions are provided in the appendix. The course instructor graded the entrance and exit exams. The test included one fill-in-the-blank item, graded as correct (1) or incorrect (0), and one yes/no question, likewise graded as correct or incorrect. The other items were unstructured, allowing students to write brief statements in reply to each question. Responses to these items were graded on a scale: students who wrote nothing or wrote something that indicated no familiarity with the topic were scored 0; students who indicated at least a passing familiarity with the topic scored 0.5; and students who indicated an understanding of the topic were scored 1. We summed these scores across items, creating a cumulative test score with a potential range from 0 to 7.
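For concreteness, the sketch below shows how this scoring scheme translates into a cumulative score. This is not the authors' grading code; the item names and data structure are our own hypothetical labels for the seven exam items listed in the appendix.

```python
# Hypothetical sketch of the scoring scheme: two items graded 0/1 and five
# open-ended items graded 0, 0.5, or 1, summed into a cumulative 0-7 score.

BINARY_ITEMS = ["identify_sample", "possibility_of_representativeness"]  # 0 or 1
OPEN_ENDED_ITEMS = [
    "explain_random_sample", "explain_representativeness", "margin_of_error",
    "question_context_effects", "double_barreled_questions",
]  # 0 = no familiarity, 0.5 = passing familiarity, 1 = understanding

def cumulative_score(grades: dict) -> float:
    """Sum per-item grades into a cumulative test score (0 to 7)."""
    total = 0.0
    for item in BINARY_ITEMS:
        assert grades[item] in (0, 1), f"{item} must be graded 0 or 1"
        total += grades[item]
    for item in OPEN_ENDED_ITEMS:
        assert grades[item] in (0, 0.5, 1), f"{item} must be 0, 0.5, or 1"
        total += grades[item]
    return total
```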

Table 2 shows the average score in each class for items on the entrance and exit exams. On average, across these items, students in the 2005 class were most familiar with the term sample, and were more familiar with this term than was the 2008 class. In the 2008 class, students performed best on the item that asked whether a sample could be representative of a population under investigation. However, students in 2005 still showed more familiarity with this concept than did students in the 2008 course. In fact, on all but two items, students in the 2005 class outscored students in the 2008 class on both the entrance and exit exams.

Table 2 Average Entrance and Exit Item Scores

Figure 1 shows aggregate class averages on the entrance and exit exams. In this graph, we present results based on all exit and entrance test responses. In the spring 2005 class featuring the RDD survey, 74 students completed the entrance exam, with an average score of 2.7. At the end of the course, 60 students completed the exit test, with an average score of 4.6. In the aggregate, then, the average improvement was 1.9 points. Among the 55 students who completed both tests in 2005, the average student scored 1.9 points higher on the exit test. In the spring 2008 course with the literature review term paper, the average score on the entrance test (among 76 students) was 1.9. On the exit test, completed by 84 students, the average score rose to 3.5. Across all test-takers, this increase represents a 1.6-point improvement. Restricting this sample to the 68 students with both entrance and exit scores in 2008, the average student improved by 1.8 points.

Figure 1 Average Scores on Entrance and Exit Tests, 2005 and 2008

We were surprised to find that students in both classes improved by roughly the same amount: in both classes, the average exit score was roughly two points higher than the average entrance score. Importantly, students in the 2005 class had an overall higher level of survey methodology knowledge upon entering the course, suggesting that the course with the original survey project attracted a different kind of student than the literature-based course did. After investigating change on individual items on the entrance and exit exams, we focus on the problem of self-selection into these classes.

DATA AND PRELIMINARY RESULTS

For each item on the entrance and exit tests, we computed a difference score, subtracting each student's score on a given entrance exam item from his or her score on the corresponding exit exam item. Table 3 shows the average change score on each item for each class, as well as a t-test of the hypothesis that the average student improvement in the 2005 class differed from that in the 2008 class. On the items investigating student knowledge of sampling, these differences tend not to reach conventional levels of statistical significance. However, on the item asking students to explain why a relatively small random sample can represent a large population, we see substantially larger improvements among students in the hands-on survey research course than among students in the literature-based course. Similarly, students in the active learning course improved more in explaining the concept of margin of error than did students in the 2008 class.
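The computation behind Table 3 can be sketched as follows. This is our own minimal reconstruction, assuming paired per-student entrance and exit item scores are available as arrays; the variable names are hypothetical.

```python
# Difference-of-means test on per-student change scores for one exam item,
# comparing improvement in the 2005 (survey project) and 2008 (literature
# review) classes. Restricted to students with both entrance and exit scores.
import numpy as np
from scipy import stats

def item_change_test(entrance_2005, exit_2005, entrance_2008, exit_2008):
    change_2005 = np.asarray(exit_2005) - np.asarray(entrance_2005)
    change_2008 = np.asarray(exit_2008) - np.asarray(entrance_2008)
    # Two-tailed t-test of equal average improvement across the two classes
    t_stat, p_value = stats.ttest_ind(change_2005, change_2008)
    return change_2005.mean(), change_2008.mean(), t_stat, p_value
```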

Table 3 Change between Entrance and Exit Test Scores, Difference of Means Tests

Note: ** p < .01, * p < .05, † p < .10 (two-tailed tests).

The items that explicitly focused on questionnaire design produced an odd pattern of results. More students in the 2005 hands-on course learned to identify context effects in survey question ordering. On the other hand, more students in the 2008 class learned about double-barreled questions. These results may be due, at least in part, to the fact that students entering the 2005 class had a higher level of understanding of double-barreled questions than students entering the 2008 course. The entrance test average score for this item in 2005 was 0.35. The entrance test average for this item in 2008 was 0.21, significantly lower than the earlier score (t = 3.24, p < .001).

Among students with both entrance and exit test scores, we see no significant difference in overall change in test scores. The average student in the active learning version of the class improved 1.9 points; in the later, literature-oriented version, the average student improved 1.8 points. This difference is neither substantively nor statistically significant (t = 0.60). However, on individual items related to both sampling and questionnaire construction, we see substantial differences between the two classes, suggesting that the active learning component enhanced learning outcomes.

THE PROBLEM OF SELF-SELECTION

Based on previous findings on the differences in learning styles of students in political science classes, we might expect students to self-select into or out of a class based on their own learning preferences. Fox and Ronkowski (1997) have examined the learning styles of students in political science courses using Kolb's (1984; 1985) Experiential Learning Cycle and Learning Style Inventory (LSI). This scaling device identifies types of learners such as accommodators and assimilators. Assimilators learn by combining "abstract conceptualization and reflective observation" (Fox and Ronkowski 1997, 734), while accommodators prefer active experimentation and concrete experience.

As a major, political science attracts "a higher number of assimilators than any other type of learners" (Fox and Ronkowski 1997, 734). Fox and Ronkowski find further that women are more likely than men to be accommodator-type learners, and they express concern that "this may put women students at a disadvantage in political science classes, since the accommodator prefers active experimentation and concrete experience while most political science courses cater to abstraction and reflection, thus favoring the learning styles of the male students" (734). Two other major types of learners are convergers, who prefer to combine "abstract conceptualization and active experimentation," and divergers, who prefer to combine "concrete experience and reflective observation" (733–34). The study found that juniors and seniors are more likely to be convergers and assimilators, while freshmen and sophomores are more likely to identify themselves as accommodators and divergers (735).

Given the differences in learning preferences of political science students, the issue of self-selection must be addressed further. As noted previously, students at the university hosting this quasi-experiment have a great deal of latitude in selecting classes. This course was not required, but it attracted healthy enrollments. Nonetheless, the fact that students were informed of the intensive original research process at the beginning of the 2005 class may have contributed to decisions to enroll in this course. The entrance test scores for the 2005 course were on average one item higher than the entrance scores for the 2008 class. A similar gap persists on the exit test, with approximately one correct answer separating the two classes. Our interpretation of this pattern of scores is that students of a different type, more interested in social science research methods, selected themselves into the 2005 class. During the last meeting of the 2008 course, the instructor informed students of the structure of the 2005 class and asked them whether they would prefer a course with an original survey project or a literature-based class such as the one they took. Only 35% of the 2008 students expressed a preference for the more active learning experience. This finding further suggests an underlying difference between participant types in the two courses.

Consequently, to better assess the effect of the active learning element of this public opinion course on students, we estimated average treatment effects on the treated—the students who participated in the original survey project—using nearest neighbor matching (Abadie et al. 2004). Matching techniques are receiving increasing interest and use in political science and policy research, including but not limited to research on program evaluation (Atzeni and Carboni 2008), institutional differences (Kousser and Mullin 2007), and media effects (Spader et al. 2009). A number of scholars have articulated the underlying logic of these techniques (e.g., Rosenbaum and Rubin 1985). Essentially, researchers would prefer to assess the effects of an experimental intervention by examining an intervention's outcome for a person as well as the outcome for that same person absent the intervention. Clearly, this is impossible. Consequently, the best available way to assess the effects of an intervention is to conduct a true experiment in which participants are assigned at random to treatment or control conditions. This design allows the researcher to examine outcome differences between the two groups to estimate a treatment effect. However, we often find ourselves analyzing data from research conducted without random assignment to conditions. Social scientists are increasingly cognizant that our research subjects often exercise a great deal of choice in engaging in our experimental manipulations.

Our present research design suffers from this very problem: students had a great deal of choice in selecting these classes. Thus, the fact that students could sort themselves into or out of a research-intensive course implies that the straightforward difference-of-means tests comparing the classes, presented in Table 3, may be biased. Matching allows researchers to empirically construct hypothetical treatment and control groups and explore the effects of treatments on the most similar participants. Thus, we can assess what we might have seen if more similar students had taken the two classes.

We treated the research-intensive class as an experimental treatment and estimated logistic regression-based propensity scores. These scores indicate the probability that a given student took the research-intensive class rather than the literature-based class as a function of his or her score on the entrance exam, cumulative grade point average in the quarter before joining the class, standing as a senior-level student or not, participation in the political science major or not, and gender. We computed average treatment effects on the treated, the students in the course with the original survey project, matching on the basis of these propensity scores with 3:1 nearest neighbor matching with replacement (see Abadie et al. 2004), allowing as many as three matched comparisons between the treatment and control groups. This procedure reduces bias in the estimation of the treatment effect, but it can increase the variance of the estimator.
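A compact sketch of this procedure appears below. Abadie et al. (2004) implement these estimators in Stata; the Python analogue here is our own illustration under stated assumptions, and the covariate names are hypothetical.

```python
# Average treatment effect on the treated (ATT) via 3:1 nearest neighbor
# propensity-score matching with replacement. Treatment = enrollment in the
# 2005 survey-project class; outcome = exit-minus-entrance test score change.
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_nearest_neighbor(X, treated, outcome, k=3):
    # X: (n, p) covariates (entrance score, prior GPA, senior standing,
    #    political science major, gender); treated: length-n indicator
    treated = np.asarray(treated, dtype=bool)
    outcome = np.asarray(outcome, dtype=float)
    # Propensity score: estimated probability of taking the treatment class
    pscore = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    controls = np.where(~treated)[0]
    effects = []
    for i in np.where(treated)[0]:
        # k controls nearest in propensity score; matching with replacement
        # means the same control can serve as a match for several treated units
        nearest = controls[np.argsort(np.abs(pscore[controls] - pscore[i]))[:k]]
        effects.append(outcome[i] - outcome[nearest].mean())
    return float(np.mean(effects))  # ATT estimate
```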

Table 4 shows the average treatment effect on the treated for the active learning class at the question level, as well as on the overall test. Taking self-selection into consideration, we find that participants in the 2005 class improved more than participants in the 2008 class in their ability to explain representative sampling (p < .01), margin of error (p < .05), and question context effects (p < .05). Table 3 showed that participants in the 2008 class improved more between the entrance and exit exams on the item about double-barreled questions than did students in 2005; when we match students on the basis of their underlying characteristics and estimate a less biased treatment effect, this difference fails to reach conventional levels of statistical significance. Importantly, once we take the population differences into account, we are able to see a significant difference between the two classes. These results suggest that the research-oriented class had incremental but significant effects on learning of survey research methods. Perhaps a corollary to this finding is that a student who was less engaged with research methods at the beginning of the term would have learned incrementally more about survey research methods in the research-oriented class than in the class with less direct exposure to research practices.

Table 4 Average Treatment Effect of Active Learning Class, with Nearest Neighbor Matching

Note: ** p < .01, * p < .05.

It is also important to note the statistically significant differences between the two classes on questions ascertaining students' deeper understanding of the course material or ability to apply the material learned in the course (e.g., explaining a random sample, representativeness, margin of error, and question context effects). In other words, perhaps the active learning component helps students think more like social scientists. They do more than simply memorize and reiterate facts; they gain a more solid grasp of these facts and concepts and gain the ability to apply them correctly to actual problems.

DISCUSSION

We acknowledge that there were differences between the two courses in addition to the RDD survey project; however, these differences were minor in comparison to the primary difference between the classes—that is, the inclusion of the original survey project as an active learning component. Although we are cautious in our conclusions, we argue that this study adds to our understanding of how active learning techniques affect student performance. In sum, we find evidence of modest improvement in learning of survey methods in a course featuring an active learning component in the form of an original survey research project. This finding is significant because it suggests, via analysis of a quasi-experiment, that active learning opportunities can be more effective than traditional lecture classes in teaching research methodology to undergraduate students, a claim that is intuitive to many people but had not been demonstrated in this particular way. However, we also note that one major finding here is that a course featuring this kind of hands-on opportunity appeals to a different kind of student than does a traditional course that keeps participants closer to the classroom and library than to the social science research laboratory. We attribute some of the differences between entrance and exit test performance in the two classes to the active learning opportunity, but, given the observable differences between these populations of students, we felt compelled to bolster our inferences with contemporary methods for identifying treatment effects in light of self-selection and nonrandom assignment to conditions.

The finding that courses including hands-on learning opportunities appeal to some students while more traditional lecture-based courses appeal to others has implications for our understanding of the effects of active learning opportunities in other types of elective courses. If a certain type of student wants an internship or wishes to take a course focused on community engagement through activities outside of the classroom, it may become difficult to associate changes in that student's level of civic engagement with the particular learning opportunity.

We echo other scholars' (e.g., Fox and Ronkowski 1997) conclusion that faculty should vary their teaching methods and instructional activities to optimize learning opportunities for students. Some students thrive in traditional lecture-based courses; however, as demonstrated here, some students learn more in a course that contains a hands-on opportunity to apply course material. Students might gain a different set of skills from participating in both types of activities.

Moreover, given the differences in the populations served by these classes, we might also more appropriately infer the importance of developing a sequence of courses in research methods. Students might best be served by a traditional literature-based course designed to build their underlying knowledge of methodological concepts, bringing them from a relatively low level of knowledge and understanding to a moderate or intermediate level. With this background, students will be prepared to gain more from an in-depth research experience and the opportunity to conduct original research, and they may even be more enthusiastic about applying and further expanding their knowledge.

APPENDIX: Survey Methodology Entrance/Exit Exam

Identify sample. There are more than 295 million residents of the U.S., so it is prohibitively expensive to survey their opinions on most matters. As a result, most public opinion researchers rely on a ________________ of the population. [fill-in]

Explain random sample. When people who conduct “scientific” surveys say they have selected participants “at random,” what do they mean? [open-ended]

Possibility of representativeness. Most “scientific” public opinion surveys collect data from 500–3,000 respondents. Do you think that data collected from this number of people can reflect the opinions of residents of a state like California (35 million people)? [yes/no]

Explain representativeness. Why or why not? [open-ended]

Margin of error. If we were to conduct a survey of U.S. residents with 800 respondents, the margin of error would be ±3.46%, with a 95% confidence interval. What does a ±3.46% margin of error mean in practical terms? [open-ended]
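As a quick check of the figure quoted in this item (our own verification, not part of the original exam), the standard conservative formula for a 95% margin of error with p = 0.5 reproduces ±3.46% for 800 respondents:

```python
# Margin of error at 95% confidence, using the conservative p = 0.5 variance
import math

n = 800
moe = 1.96 * math.sqrt(0.25 / n)  # 1.96 * sqrt(p * (1 - p) / n) with p = 0.5
print(f"±{moe:.2%}")  # prints ±3.46%
```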

Question context effects. Imagine we are conducting a survey and want to assess the job President Bush is doing in office. On our survey, we ask the following questions in this order:

Question 12. When it comes to politics, do you think of yourself as a Democrat, a Republican, an Independent, or a member of another party? 1. Democrat; 2. Republican; 3. Independent; 4. Other party; 8. Don't Know (DON'T READ); 9. Refused (DON'T READ)

Question 13. On a different subject, do you approve or disapprove of the job George W. Bush is doing as president? 1. Approve; 2. Disapprove; 8. Don't Know (DON'T READ); 9. Refused (DON'T READ)

What are the implications of asking these questions in this particular order? Specifically, what do you think asking these questions in this order would do to the relationship between observed presidential evaluations and observed partisanship? [open-ended]

Double-barreled questions. Take a look at the following survey question:

Question 17. Do you think President Bush is doing a good job or a bad job in dealing with Social Security and the war on terror? 1. Good job; 2. Bad job; 8. Don't Know (DON'T READ); 9. Refused (DON'T READ)

Do you see any problems with this? If so, what is your major concern? [open-ended]

References

Abadie, Alberto, Drukker, David, Herr, Jane Leber, and Imbens, Guido W. 2004. "Implementing Matching Estimators for Average Treatment Effects in Stata." Stata Journal 4 (3): 290–311.
Atzeni, Gianfranco E., and Carboni, Olivero A. 2008. "The Effects of Grant Policy on Technology Investment in Italy." Journal of Policy Modeling 30: 381–99.
Barabas, Jason. 2004. "How Deliberation Affects Policy Opinions." American Political Science Review 98: 687–701.
Cole, Alexandra. 2003. "To Survey or Not to Survey: The Use of Exit Polling as a Teaching Tool." PS: Political Science and Politics 36 (2): 245–52.
DeNeve, Kristina M., and Heppner, Mary J. 1997. "Role Play Simulations: The Assessment of an Active Learning Technique and Comparisons with Traditional Lectures." Innovative Higher Education 21 (3): 231–46.
Fox, Richard L., and Ronkowski, Shirley A. 1997. "Learning Styles of Political Science Students." PS: Political Science and Politics 30 (4): 732–37.
Gosen, Jerry, and Washbush, John. 2004. "A Review of Scholarship on Assessing Experiential Learning Effectiveness." Simulation and Gaming 35 (2): 270–93.
Hamlin, John, and Janssen, Susan. 1987. "Active Learning in Large Introductory Sociology Courses." Teaching Sociology 15 (1): 45–54.
Hauss, Charles. 2001. "Freshmen Conduct a National Survey." PS: Political Science and Politics 34 (2): 306–07.
Jones, Lloyd P., and Meinhold, Stephen S. 1999. "The Secondary Consequences of Conducting Polls in Political Science Classes: A Quasi-Experiment." PS: Political Science and Politics 32 (3): 603–06.
Kolb, Alice Y., and Kolb, David A. 2005. "Learning Styles and Learning Spaces: Enhancing Experiential Learning in Higher Education." Academy of Management Learning and Education 4 (2): 193–212.
Kolb, David A. 1984. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall.
Kolb, David A. 1985. Learning Style Inventory. Boston: McBer and Co.
Kousser, Thad, and Mullin, Megan. 2007. "Does Voting by Mail Increase Participation? Using Matching to Analyze a Natural Experiment." Political Analysis 15: 428–45.
Kvam, Paul H. 2000. "The Effect of Active Learning Methods on Student Retention in Engineering Statistics." American Statistician 54 (2): 136–40.
Marando, Vincent L., and Melchior, Mary Beth. 1997. "On Site, Not Out of Mind: The Role of Experiential Learning in the Political Science Doctoral Program." PS: Political Science and Politics 30 (4): 723–28.
McBride, Allan. 1994. "Teaching Research Methods Using Appropriate Technology." PS: Political Science and Politics 27 (3): 71–72.
McCarthy, J. Patrick, and Anderson, Liam. 2000. "Active Learning Techniques versus Traditional Teaching Styles: Two Experiments from History and Political Science." Innovative Higher Education 24 (4): 279–94.
Miller, Judith E., and Groccia, James E. 1997. "Are Four Heads Better Than One? A Comparison of Cooperative and Traditional Teaching Formats in an Introductory Biology Course." Innovative Higher Education 21 (4): 253–73.
Mooney, Linda A., and Edwards, Bob. 2001. "Experiential Learning in Sociology: Service Learning and Other Community-Based Learning Initiatives." Teaching Sociology 29 (2): 181–94.
Pecorella, Robert F. 2007. "Forests and Trees: The Role of Academics in Legislative Internships." Journal of Political Science Education 3: 79–99.
Rosenbaum, Paul R., and Rubin, Donald B. 1985. "Constructing a Control Group Using Multivariate Matched Sampling Methods that Incorporate the Propensity Score." American Statistician 39: 33–38.
Rubin, Donald B. 2006. Matched Sampling for Causal Effects. New York: Cambridge University Press.
Shellman, Stephen M. 2001. "Active Learning in Comparative Politics: A Mock German Election and Coalition-Formation Simulation." PS: Political Science and Politics 34 (4): 827–34.
Spader, Jonathan, Ratcliffe, Janneke, Montoy, Jorge, and Skillern, Peter. 2009. "The Bold and Bankable: How the Nuestro Barrio Telenovela Reaches Latino Immigrants with Financial Education." Journal of Consumer Affairs 43: 56–79.
Weisberg, Herbert, Krosnick, Jon A., and Bowen, Bruce. 1996. Introduction to Survey Research, Polling, and Data Analysis. Thousand Oaks, CA: Sage.