
Learning by Doing: Mentoring Group-Based Undergraduate Research Projects in an Upper-Level Political Science Course

Published online by Cambridge University Press:  26 January 2016

Benjamin R. Knoll*
Affiliation: Centre College

Abstract

Undergraduate research (UGR) is a “high-impact practice” that has been consistently shown to effectively promote desirable student-learning outcomes (SLOs) including critical thinking, logic, written and oral communication, problem solving, and interpretation of evidence, especially among minority and disadvantaged students. Mentoring quality UGR experiences in regular upper-level political science courses, however, is a difficult and time-consuming activity. This article describes an attempt to provide an intensive, semester-long, and group-based UGR experience in an upper-level American politics course. It discusses how this experience was designed to deliberately foster specific institutional UGR SLOs and summarizes student perceptions of the overall effectiveness of the experience.

The Teacher

Copyright © American Political Science Association 2016

The January 2015 issue of PS: Political Science and Politics featured a symposium titled “Research and Undergraduate Teaching: A False Divide?” The symposium focused on why and how political science faculty might incorporate undergraduate research (UGR) components into their courses. Symposium authors discussed the benefits of doing this for faculty, including increased scholarly productivity and the opportunity to generate novel research agendas (see Druckman 2015).

In addition to the benefits that this pedagogical approach offers faculty, UGR has been consistently shown to be a “high-impact practice” (Kuh 2008) for important student-learning outcomes (SLOs) such as critical thinking, logic, written and oral communication, problem solving, and interpretation of evidence (Bauer and Bennett 2003; Campbell and Skoog 2004; Hakim 1998; Hu et al. 2008; Lei and Chuang 2009; Lopatto 2004; Mabrouk 2009; Russell, Hancock, and McCullough 2007; Seymore et al. 2004). This is the case not only in the physical sciences but also in the social sciences and humanities (Ishiyama 2002). Notably, the benefits of UGR experiences in all of these domains are especially evident among minority and disadvantaged students (Ishiyama 2001; Jonides 1995; Pascarella and Terenzini 2005; Summers and Hrabowski 2006).

Despite the obvious benefits that UGR can provide to both faculty and students, there often are significant hurdles that faculty must overcome to offer meaningful UGR experiences in their courses. The most significant obstacle is time: mentoring quality UGR experiences often requires more time and effort (in an already busy semester) than more traditional written or research assignments. Given this reality, the Council on Undergraduate Research recommends that an effective way to offer UGR experiences is simply to replace more traditional course modules, assignments, and content (which perhaps have a weaker impact on desired learning outcomes) with UGR experiences proven to have stronger impacts on SLOs (Shanahan 2012).

The objective of this article is to describe and reflect on my attempt to offer a substantive, rigorous UGR component to an existing upper-level political science course. I describe the motivation, planning, and execution of the project, and I reflect on the effectiveness of this particular experience. The hope is that it may offer ideas or serve as a model for other faculty who want to incorporate more UGR experiences into their upper-level courses.

UGR PROJECT PLANNING AND ORGANIZATION

In the Fall 2014 semester, 17 students enrolled in my upper-level “Parties, Campaigns, and Elections” course at the liberal arts college in Central Kentucky where I work. I had taught this course previously but was unsatisfied with the rigor of the out-of-class written assignments. (Course evaluations revealed that my students were equally unimpressed.) In response, I decided to replace the previous set of assignments with a meaningful and substantive UGR experience. I had not previously attempted this in a semester-long course, so it was essentially an experiment. For guidance, I consulted the formalized UGR SLOs that my institution had recently adopted and designed the project with those outcomes in mind. The SLOs include the following:

  1. Identify and apply the tools, methods, sources, concepts, and ethical standards appropriate to the discipline.

  2. Communicate—both orally and in written form and to diverse audiences—the ways in which their intellectual or creative work contributes to broader frameworks, expressions, and discussions in the discipline.

  3. Effectively connect multiple ideas and approaches to bring new insights to questions at hand, overcome barriers, and develop an appreciation for the complexity and ambiguity inherent in the research process.

  4. Reflect on their work to assess progress toward personal, career, or post-graduation goals.

  5. Work independently while also identifying when input, guidance, and feedback are needed.

I introduced the “vision” for the project to my students on the first day of class. Footnote 1 I explained that we would undertake an “undergraduate research project” and that, organized into research teams, we would empirically investigate an important political science question from a variety of perspectives. The question posed to my students was: “Why are there more ticket-splitters in Kentucky than in other states?” Footnote 2 I told them that this is a question I had been thinking about since I moved to the state five years earlier and that although I had some ideas, I still did not have a satisfactory answer. In other words, the explicit framing for the project was: “Here is a puzzle about how people in Kentucky vote: they vote differently in federal elections than they do in state and local elections. Why? I’m not sure! Even your professor doesn’t have all of the answers. This semester we are going to apply social science tools to answer an interesting and important question about politics here in our community.”

I also informed the students that the project would have an audience outside of the classroom: we would present the research results at the Kentucky Political Science Association (KPSA) conference the following spring. Thus, from “Day One,” this was a serious research endeavor for a public audience that would accomplish more than simply earning a grade for the course.

The first task was to assign students to different research teams. I divided the 17 students into five teams of three or four students each. Each team would be responsible for creating one final product: a research paper investigating the question of Kentucky ticket-splitting. This gave students an opportunity to work in a collaborative setting with one another (and reduced the number of projects for me to supervise from 17 to five). The groups were organized according to the methodological approach each would take to our collective research question. I determined the approaches in advance so that they covered a broad array of quantitative and qualitative methods applied to both primary and secondary evidence. Group 1 would perform a quantitative analysis of local exit-poll survey data. Group 2 would perform a quantitative analysis of secondary data sources. Group 3 could perform either a quantitative or a qualitative analysis of historical evidence (or both). Group 4 would perform a qualitative analysis of interviews with political elites, scholars, and journalists in Kentucky. Group 5 would perform a qualitative analysis of interviews with “average” Kentucky voters.
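To make concrete the kind of calculation Group 1’s assignment involved, the following is a minimal sketch, in Python, of estimating a ticket-splitting rate from exit-poll responses. It is illustrative only: the file name, column labels, and coding scheme are hypothetical and are not the actual instrument or data the group used.

```python
# A hypothetical sketch of Group 1's basic quantity of interest: the share of
# exit-poll respondents who voted for different parties at the federal and
# state levels. The file name and column names ("federal_vote", "state_vote",
# coded "D"/"R"/etc.) are invented for illustration.
import csv

def ticket_splitting_rate(path):
    """Return the share of respondents whose federal-office and state-office
    votes went to different parties; respondents with missing answers are skipped."""
    splitters = total = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            fed, state = row["federal_vote"].strip(), row["state_vote"].strip()
            if fed and state:
                total += 1
                splitters += (fed != state)
    return splitters / total if total else float("nan")

if __name__ == "__main__":
    print(f"Ticket-splitting rate: {ticket_splitting_rate('exit_poll.csv'):.1%}")
```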

It is important to note that most students had previously taken an empirical-analysis course in the political science, history, or economics department. Thus, I assumed that they had basic social science analysis skills and that significant class time on the specifics of writing literature reviews, citing sources, analyzing evidence, and so on would not be necessary. (I recommended chapters from the Baglione 2011 text for various parts of the project if they needed a refresher.) I ensured that at least one or two students in each group had previously taken one of these courses; students were then assigned to groups based on their skill sets and their indicated first and second preferences. I emphasize that I could not have executed the project in the same way if students did not have at least a basic familiarity with empirical research methods. (See Elman, Kapiszewski, and Kirilova 2015, who discuss the role of methods training in undergraduates’ success in subsequent UGR experiences.)

After the groups were organized, one student in each group was designated as the “team leader” responsible for submitting assignments and organizing team efforts. We used one class session to discuss group dynamics: managing time, being proactive and reliable participants, avoiding the “free-rider” temptation, and so on. Footnote 3

UGR PROJECT EXECUTION

The first step in the project was for each team to produce a literature-review essay. Each group investigated the “ticket-splitting” phenomenon to learn what political scientists already know about why people ticket-split, as well as why this might be more prevalent specifically in Kentucky than in other states. The groups were given significant leeway in how they decided to collectively draft and write the literature-review essays (see UGR SLO #5).

After the five essays were submitted, I compiled the various “schools of thought” on ticket-splitting from each essay for the class to review together. Students had identified 11 distinct explanations and/or hypotheses from their review of the literature, including candidate-centered voting, partisan ambivalence and cross-pressures, political sophistication and socioeconomic dynamics, balancing theory, off-year election effects, post–Civil War Reconstruction effects, and the presence of strong coal-mining unions in Kentucky. In class, we summarized each potential explanation and then collectively discussed which methodological approach would be best suited to test each specific hypothesis (see UGR SLO #3). This also gave students the opportunity to consider the strengths and weaknesses of quantitative versus qualitative approaches (see UGR SLO #1). After we decided which hypotheses were most appropriate for each methodological approach, I assigned each hypothesis to the one or two groups responsible for that approach.

When this step was completed, students then had approximately six weeks to finish the “research-design” and “data-and-analysis” steps of their project (i.e., the former step was due about a month before the latter step). Again, because most students had already taken an empirical-analysis course, I did not use much class time on instruction for these assignments. I was consistently available outside of class, however, to consult with students as they worked on the assignments (see UGR SLO #5). Footnote 4

After students had performed the data-analysis part of their project and drafted their paper, the next step was to engage in a “peer review” of one another’s full essays. This provided an opportunity to discuss the norms of peer review in academia as well as to gain hands-on experience with the process (see UGR SLO #1). Each research team was responsible for writing a detailed critique of three other groups’ papers.

The final portion of the project, in the last week of the semester, was a formal oral presentation by each team of its findings to the entire class. This was an opportunity to discuss professional “conference-paper presentation” norms as well as to gain practice for the presentation that they would make the following spring at the KPSA conference (see UGR SLOs #1 and #2). Each team received a grade according to an oral-presentation rubric.

After the presentations, we spent one class in a “debriefing” session to hear the results of the various research projects. We compared each group’s findings with its original hypothesis to determine whether the hypothesis had been confirmed or disconfirmed. There were clear patterns: some hypotheses received support across methodological approaches, whereas others received support with one methodological approach but not when tested with another. This was an excellent outcome to illustrate to the students that academic research is not always a “neat and tidy” process; on the contrary, it often is “messy,” with many false starts and partial or even contradictory answers (see UGR SLO #3).

Despite occasional snags and delays in the process, each group produced a high-quality original UGR paper that used empirical evidence to test a clear social science hypothesis derived from political science literature. The following spring, one student from each group was accepted to present their paper at a roundtable panel at the 2015 KPSA conference (see UGR SLOs #1 and #2).

GRADING COLLECTIVE UGR EXPERIENCES

A difficult component of this project was how to grade every student fairly and meaningfully on their contribution to the group research projects. My goals were to (1) give students experience working together and being evaluated collectively as a group (to prepare for post-graduation experiences they are likely to encounter in the workforce; see UGR SLO #4); and (2) hold individual students accountable for their contribution (or lack thereof) to the group as a whole. In total, the project counted for 35% of the final course grade. The grading was weighted as follows (a brief worked example of the arithmetic appears after the list):

  1. My assessment of your group’s literature-review assignment (5%); theory, hypothesis, and research-design assignment (5%); and peer-review assignment (5%) (each team member receives the same grade).

  2. My assessment of your group’s final paper and presentation (10%) (each team member receives the same grade).

  3. Your assessment of your own contribution to the group’s efforts, assignments, and final paper and presentation (2.5%) (based on a scoring rubric).

  4. Your group members’ individual confidential assessments of your contribution to the group’s efforts, assignments, and final paper and presentation (7.5%) (based on a scoring rubric).
For the self-evaluation and team-member evaluation, I used the rubric available at http://jfmueller.faculty.noctrl.edu/crow/altmangroupprocessrubric.pdf. Students completed this rubric for one another’s efforts; to ensure confidentiality, they were instructed to keep them in a sealed envelope and submit them to me privately.

In general, students seemed to appreciate that there would be a mechanism to ensure consequences for students who did not “pull their weight” in the group; they reported that it kept them accountable as well. Students gave one another frank and honest assessments, equally willing to praise and to criticize as warranted. I noted that the peer assessments matched fairly well with the self-assessments; on average, however, students rated themselves slightly higher than their peers rated them.
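As an illustration of that comparison, here is a minimal sketch that contrasts each student’s self-rating with the mean of the ratings received from teammates; the data structure, names, and scores are hypothetical rather than the actual rubric data.

```python
# Hypothetical rubric scores: each student maps to (self-rating, list of
# ratings given by teammates), all on the same rubric scale.
ratings = {
    "Student A": (18, [17, 16, 17]),
    "Student B": (16, [16, 15, 17]),
    "Student C": (19, [17, 18]),
}

gaps = []
for student, (self_score, peer_scores) in ratings.items():
    peer_mean = sum(peer_scores) / len(peer_scores)
    gaps.append(self_score - peer_mean)
    print(f"{student}: self = {self_score}, peer mean = {peer_mean:.1f}")

# A positive average gap corresponds to the pattern reported above:
# students rating themselves somewhat higher than their peers rated them.
print(f"Average self-minus-peer gap: {sum(gaps) / len(gaps):+.2f}")
```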

STUDENT ASSESSMENT OF THE UGR PROJECT

At the conclusion of the semester, I administered an anonymous evaluation instrument specifically designed to measure the effectiveness of the project in achieving various learning outcomes, including the institutional UGR SLOs. In general, student responses were positive: 88% either strongly or somewhat agreed that their “ability to write an academic research paper improved as a result of this experience.” All but one student reported that they either strongly or somewhat agreed that their “ability to problem-solve and overcome obstacles has improved.” Every student agreed that their “ability to effectively connect multiple ideas and approaches to bring new insights to research questions has improved.” Students were less optimistic about the extent to which the experience “helped me productively reflect on or make progress toward a personal, career, or post-graduation goal,” with only 59% either somewhat or strongly agreeing. Overall, however, every student agreed that “this was an effective and educational research experience” (approximately two thirds strongly agreed and one third somewhat agreed).

A common theme in the open-ended qualitative responses was that the group-based organization of the project enabled the students to effectively share tasks among group members. More than half wrote a variant of the following comment: “[A helpful aspect was that] being able to split the workload, compartmentalize, and become an expert in one area allowed for higher quality papers.” A few students appreciated that, although difficult, the group-based project provided insight into their future work environments after graduation. For instance: “[T]he entire group aspect was the most important factor in this whole experience. I think that finally, I’ve done something in class here that I can see immediate applications for in the world waiting for me outside the classroom.” Another student stated: “It is always good to practice working with other people since many careers require working with groups.”

In terms of challenges that the students faced, the single most common theme in the open-ended comments was that it was difficult for them to meet regularly to accomplish the group tasks, given that they were involved in extracurricular activities and also had homework for other courses. A representative comment was: “Since everyone is so busy, it was hard to find times to meet and to have everyone turn in their part in a timely fashion so that the whole group would edit it.” Another comment stated: “We had to meet A LOT. With busy schedules (for all of us) this can get tricky, but we made it work.” I am not persuaded that it is a problem when students must expend significant hours on a project outside of class. However, in the future, I will be more deliberate about providing guidance in terms of time management and group-work dynamics.

The final open-ended question asked for students’ suggestions on how to improve the project in the future. A key recommendation was that the groups conducting qualitative interviews could benefit from additional group members. The interview and transcription process was (arguably) more time-consuming than the data-collection and analysis process for the quantitative groups. This is a fair critique that I intend to address in future courses.

CONCLUSION

To summarize, there is strong pedagogical evidence that undergraduate research is an effective way to introduce students to the craft of research as well as to provide opportunities to make progress toward a variety of desirable learning outcomes. Because mentoring UGR experiences can be costly and time-consuming for faculty, however, I endorse the Council on Undergraduate Research’s recommendation: instructors can provide meaningful UGR while minimizing the time cost by replacing more traditional assignments with UGR experiences that take approximately the same amount of time to supervise and grade but that have proven to be more pedagogically effective.

ACKNOWLEDGMENT

I gratefully acknowledge the John Marshall Harlan Research Fund at Centre College for providing financial support for this project.

Footnotes

1. It required significant time on my part to prepare the basic outline and timeline of the project before the first day of class. I recommend that other instructors at least outline and calendar a similar project before class begins.

2. Despite the strong undercurrents of party sorting and polarization at the national level in the last 50 years (Abramowitz 2012), Kentucky continues to regularly elect Republicans to federal offices and Democrats to state and local offices at rates much higher than in other states (Turner and Lasley 2013).

3. I also assigned students to review the following websites, which offer helpful suggestions for working productively as a team: http://sydney.edu.au/stuserv/learning_centre/help/discussGrp/dg_goodGroup.shtml and http://isites.harvard.edu/fs/html/icb.topic58474/wigintro.html.

4. One snag we encountered was in Groups 4 and 5, which were assigned to perform the qualitative analysis on in-person interviews with Kentucky political elites and voters. As part of the project, I required them to complete and submit an Institutional Review Board (IRB) application because they were interviewing live human subjects. (Whereas at many institutions this would be a lengthy multi-month review and approval process, IRB applications at our small liberal arts college can be reviewed and approved within a few weeks.) I provided students with the IRB documentation and instructions and assumed that they could write a basic IRB application without supervision or guidance. However, this was more challenging than I had assumed because the students had little prior experience with the process. We ultimately submitted multiple revisions of the IRB application before obtaining approval; as a result, the students had less time to conduct and analyze their interviews. I recommend that other instructors consider how they might address this issue, given the nature of the IRB approval process at their particular institution.

REFERENCES

Abramowitz, Alan I. 2012. The Polarized Public? Why American Government Is So Dysfunctional. 1st edition. New York: Pearson.
Baglione, Lisa A. 2011. Writing a Research Paper in Political Science: A Practical Guide to Inquiry, Structure, and Methods. 2nd edition. Washington, DC: CQ Press.
Bauer, Karen W., and Bennett, Joan S. 2003. “Alumni Perceptions Used to Assess Undergraduate Research Experience.” Journal of Higher Education 74 (2): 210–30.
Campbell, Ashley, and Skoog, Gerald. 2004. “Preparing Undergraduate Women for Science Careers.” Journal of College Science Teaching 33 (5): 24–7.
Druckman, James N. 2015. “Research and Undergraduate Teaching: A False Divide? Symposium Introduction.” PS: Political Science and Politics 48 (1): 35–8.
Elman, Colin, Kapiszewski, Diana, and Kirilova, Dessislava. 2015. “Learning Through Research: Using Data to Train Undergraduates in Qualitative Methods.” PS: Political Science and Politics 48 (1): 39–43.
Hakim, Toufic. 1998. “Soft Assessment of Undergraduate Research: Reactions and Student Perspectives.” Council on Undergraduate Research Quarterly 18 (4): 189–92.
Hu, Shouping, et al. 2008. “Reinventing Undergraduate Education: Engaging College Students in Research and Creative Activities.” ASHE Higher Education Report 33 (4): 1–103.
Ishiyama, John. 2001. “Undergraduate Research and the Success of First-Generation, Low-Income College Students.” Council on Undergraduate Research Quarterly 22 (1): 36–41.
Ishiyama, John. 2002. “Does Early Participation in Undergraduate Research Benefit Social Science and Humanities Students?” College Student Journal 36 (3): 381–7.
Jonides, John. 1995. “Evaluation and Dissemination of an Undergraduate Program to Improve Retention of At-Risk Students.” Available at http://files.eric.ed.gov/fulltext/ED414841.pdf.
Kuh, George D. 2008. High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. Washington, DC: Association of American Colleges and Universities. Available at http://secure.aacu.org/store/detail.aspx?id=E-HIGHIMP.
Lei, Simon A., and Chuang, Ning-Kuang. 2009. “Undergraduate Research Assistantship: A Comparison of Benefits and Costs from Faculty and Students’ Perspectives.” Education 130 (2): 232–40.
Lopatto, David. 2004. “Survey of Undergraduate Research Experiences (SURE): First Findings.” Cell Biology Education 3 (4): 270–7.
Mabrouk, Patricia Ann. 2009. “Survey Study Investigating the Significance of Conference Participation to Undergraduate Research Students.” Journal of Chemical Education 86 (11): 1335–40.
Pascarella, Ernest T., and Terenzini, Patrick T. 2005. How College Affects Students: A Third Decade of Research. San Francisco: Jossey-Bass.
Russell, Susan H., Hancock, Mary P., and McCullough, James. 2007. “Benefits of Undergraduate Research Experiences.” Science 316 (5824): 548–9.
Seymore, Elaine, Hunter, Anne-Barrie, Laursen, Sandra L., and DeAntoni, Tracee. 2004. “Establishing the Benefits of Research Experiences for Undergraduates in the Sciences: First Findings from a Three-Year Study.” Science Education 88 (4): 493–534.
Shanahan, Jenny Olin. 2012. “Curricular Support for Faculty Who Engage in Undergraduate Research.” In Faculty Support and Undergraduate Research: Innovations in Faculty Role Definition, Workload, and Reward, Hensel, Nancy H. and Paul, Elizabeth L., editors, 68–76. Washington, DC: Council on Undergraduate Research.
Summers, Michael F., and Hrabowski, Freeman A. III. 2006. “Preparing Minority Scientists and Engineers.” Science 311 (5769): 1870–1.
Turner, Joel, and Lasley, Scott. 2013. “Political Parties and Elections in Kentucky.” In Kentucky Government, Politics, and Public Policy, Clinger, James and Hail, Michael, editors, 163–83. Lexington: University Press of Kentucky.