
How Social Science Research Can Improve Teaching

Published online by Cambridge University Press:  21 June 2013

Gary King, Harvard University
Maya Sen, University of Rochester

Abstract

We marshal discoveries about human behavior and learning from social science research and show how these can be used to improve teaching and learning. The discoveries are easily stated as three social science generalizations: (1) social connections motivate, (2) teaching teaches the teacher, and (3) instant feedback improves learning. We show how to apply these generalizations via innovations in modern information technology inside, outside, and across university classrooms. We also give concrete examples of these ideas from innovations we have experimented with in our own teaching.

Type: The Teacher
Copyright © American Political Science Association 2013

Humans have theorized about how to teach for thousands of years and update the substance of what we teach almost every year. Yet generations have passed without any major improvements in the procedures and style of teaching in our classrooms. If your great-great-great-grandparents went to college, they probably sat in a classroom with all the other students facing forward, trying to look attentive, while the professor professed. If you are a professor at a university today, you probably lecture to the same sea of students, all still trying to look like they are paying attention. To be sure, you may use some newer technologies (such as electricity, radio, TV, whiteboards, and PowerPoint slides), you may have added a few group activities, and you perhaps teach a seminar with lots of discussion. But if your ancestors were to walk into a classroom today, they would know where to sit, what to do, and how to act. Our methods of teaching have changed very little.

Education researchers, often in and around schools of education, have written volumes about how to improve teaching and learning. Many of these ideas are extremely promising, but too few are firmly established by rigorous empirical research and replicated in different areas (Whitehurst 2010). The problem is not the researchers; the problem is the almost unique (and probably underappreciated) difficulty of doing research in this area. Methodologically, we have a large number of students, but the unit of analysis for a teaching intervention is the professor or class. Thus, any one professor intervening in his or her own classroom has an n = 1 study. Although intervening in your own classroom is easy, getting a reasonable sample size with the right unit of analysis is almost impossible and rarely done: imagine the difficulty of explaining to (say) 50 of your colleagues that they and their classes will be assigned (randomly or otherwise) to treatment and control groups to test a hypothesis. As one example, in 75 years of education research on the effects of a variable as important as class size, only one fully randomized large-scale study has ever been done (Chingos 2013)!

We need to look for more opportunities for this type of research within education.Footnote 1 But even without it, substantial progress is now possible thanks to research from other fields—fields where large-scale randomized trials, and many other types of rigorous research designs, are possible. Recent developments in social science research mean we know more about how human beings think and learn, which, we show, can be marshaled to improve our teaching. In addition, technology has progressed far past that used in most current classrooms. Although no technology used by itself has any necessary effect on learning (and new teaching technology can often be a distraction), some new technologies make it easier to take advantage of social science insights to improve teaching. Finally, unprecedented societal forces outside the university—including for-profit universities, massive open online courses, commercial and not-for-profit ventures, and the web—are now conspiring to overturn centuries of stable funding models (King and Sen 2013). In every other area of society, one either adjusts to forces like these or gets run over. It is time for those inside universities to pay attention and to use their unique advantage—research—to improve teaching.

In this article, we discuss how social science knowledge and technological innovations can help us teach better. We do this by distilling three principles from social science research: (1) social connections motivate, (2) teaching teaches the teacher, and (3) instant feedback improves learning. We find evidence for these principles in research from social and cognitive psychology, public health, economics, sociology, and political science. To show how these principles can be used in teaching, we draw from our experience developing and using several interrelated technologies in teaching our class, Harvard's Gov2001: Advanced Quantitative Political Methodology (see http://j.mp/G2001). We illustrate applications of these principles outside, inside, and across classrooms in Section 2.

We conclude in Section 3 by discussing the growing movement in natural and physical science departments to devote some of their own faculty positions and other resources to education research. That scientists are now participating in what is essentially social science research is gratifying, but we social scientists have the same needs, our own teaching issues, and more knowledge of the area to bring to bear on common problems. It is time we make a contribution, both in terms of research, toward which we take a step in this article, and resources.

1. SOCIAL SCIENCE LEARNING PRINCIPLES

Thanks to advances in the methods of causal inference, huge increases in data collection, and improved theoretical understandings, we have a better understanding of how and why people learn and behave than ever before. From this massive literature, we extract three principles that can be applied to improve teaching.

Principle 1: Social Connections Motivate.

Coaxing individuals to take actions that benefit themselves—such as losing weight, exercising, and not smoking—is often extremely difficult. But getting them to take actions that involve social interaction or that benefit the community—such as recycling or joining the PTA—is often far easier. Social scientists have learned how to use this insight by making individual activities into social activities, thereby increasing the effectiveness of individual-level interventions. For example, the large “get out the vote” literature shows a tiny effect of all types of individual citizen contacts, such as phone calls, in-person visits, or mailings. But studies that add a social component—such as explaining to a respondent which of a person's neighbors have already voted—can increase a person's propensity to vote by as much as 8 percentage points (Gerber, Green, and Larimer 2008).

The same insight applies more widely: we tend to lose and gain weight when our friends do (Christakis and Fowler 2007; VanderWeele 2011). We drink less, exercise more, and smoke less when our friends and associates do (Christakis and Fowler 2008; Rosenquist et al. 2010). Social networks influence what we eat (Pachucki, Jacques, and Christakis 2011), how happy we are (Fowler and Christakis 2008), the probability we end up lonely or depressed (Cacioppo, Fowler, and Christakis 2009), where we live (DiPrete et al. 2011), the kind of health habits we take up (Centola 2010), and whether our marriages persist or end (McDermott, Fowler, and Christakis 2009). Social connections motivate recycling (Burn 1991), influence the importance of attending religious services (Lim and Putnam 2010), and affect many other behaviors and attitudes.

Social connections affect so many aspects of our lives that our argument that they can also be applied to education and learning should come as no surprise. It is not only for efficiency that a group of students is taught together in the same classroom or that elementary schools spend so much time trying to integrate students socially into the class environment. Some research in education provides evidence for this point directly in the context of traditional higher education (Garrison, Anderson, and Archer 1999; Summers and Svinicki 2007) and online education (Barab and Duffy 2000; Dawson 2006; DeSchryver et al. 2009; Graff 2003; Rovai 2003; Shea 2006), where community building has been shown to be of particular importance because social interactions are relatively infrequent.

Of course, social connections can also distract students, detract from a common purpose, and derail lectures. Finding ways of using this powerful tool in a productive way is crucial. We discuss specific implementations in Section 2.

Principle 2: Teaching Teaches the Teacher.

Social psychologists have demonstrated that under normal circumstances, we “mind wander” (i.e., think about subjects other than those in which we are nominally participating) almost half of all our waking hours (Killingsworth and Gilbert 2010; Morse 2012). Although the literature does not include measures of mind wandering while watching university lectures, it is doubtful that the rate is any lower. People also tend to be less happy when mind wandering, which cannot possibly help students learn, to say nothing of teaching evaluations.

So how do we get students to pay more attention? One strategy is to use the fact that social interactions eliminate about half of this effect: when engaged in conversation with others, people's minds wander only about a quarter of the time (Morse 2012). If we can turn the students into teachers—arranging for them to explain what they have learned to others, having them ask questions, debate, persuade, and otherwise engage the subject matter socially—we can capture a great deal more of their attention than would otherwise be possible.

Almost anyone who has taught understands this fact: study a subject yourself and you can learn a great deal. But teach that same subject to someone else and you understand it far better than you ever realized. The person you are teaching will also learn, even if not as much as you did. That “teaching teaches the teacher” has been demonstrated empirically in many studies (Chi et al. 1994; VanLehn et al. 2007). We believe it is explained, in part, by the difficulty of mind wandering while engaged socially, by being forced to organize one's thoughts in a more productive way, and by active rather than passive engagement. We give some examples below of how to harness this principle.

Principle 3: Instant Feedback Improves Learning.

Suppose you are an athlete practicing to make the Olympic diving team and you arrive for practice on a Saturday. How much would you improve if the coach watched you silently all day and then gave you a summary of how you did after practice was over? You would learn some, but you would learn a lot more if, as is typical, you received detailed feedback immediately after every dive.

It is the same story with university education: economic, psychological, medical, and educational research demonstrates convincingly that immediate and frequent feedback improves learning (Dihoff, Brosvic, and Epstein 2003, 2004; Dubner and Levitt 2006; Hattie and Timperley 2007; Hodder et al. 1989). The more chances students have to try and fail, the quicker they will master the skill. Implementing this advice involves frequent evaluation: as in science generally, students learn more when they have the chance to be proven wrong. It also means eliminating waiting periods before questions can be answered, helping students understand the limits of their knowledge, and encouraging them to ask questions as soon as they hit a stumbling block. Requiring them to wait until office hours, section, or the next class should not be part of the drill.

2. IMPLEMENTING THE PRINCIPLES

We now give some ways of combining the social science principles outlined in Section 1 with innovations in information technology. We do so outside (Section 2.1), inside (Section 2.2), and across (Section 2.3) classrooms. The technologies we describe are those we have developed or tried ourselves, but they represent only a few of the possible applications of the principles.

2.1. Outside the Classroom

Here we give examples of three innovations, each of which takes advantage of the principles described earlier. In all cases, we seek to make the class and its social connections continue throughout the week until the next classroom experience.

Making Lectures into Interactive Homework.

Putting a university lecture together incurs significant start-up costs for instructors: getting the material together, writing slides, preparing the final presentation, and more. The good news is that after the lecture is written, the marginal costs of repeated presentations are low. The bad news is that small yearly improvements result in the same lectures being presented over and over again, or in lectures that improve while learning does not. This situation, combined with the fact that lectures today are often videotaped, disincentivizes students from coming to class, paying attention, and learning.

As an alternative, we assign portions of the lecture videos as homework, using an open-source collaborative video technology that we helped develop. This system, which was created by the Harvard University Academic Technology Department, has been released open-source and can now be used by instructors around the world.Footnote 2 (Commercial analogues exist as well—e.g., echo360 and others.) This video Collaborative Annotation Tool (CAT) has at least three benefits. First, with CAT, students can hit “rewind” as often as they like. Because social connections motivate, students rarely stop the professor to ask questions in class, even when it would be beneficial: students do not want to be seen by their peers as not paying attention, not understanding the material, or disrespecting the professor or other students, and so they sit quietly, trying to appear attentive. Because so much time is spent mind wandering, a live lecture can sometimes be described as little more than a sequence of missed opportunities. Collaborative annotation outside of class can help change this.

Second, if rewind does not help, a student can stop the CAT playback and annotate the timeline of the video or one of the associated slides (that turn in sync with the video) with a question or comment. That is, the students can literally pause the video to write in clarifying or substantive questions that correspond with what the instructor is saying at that exact moment (see figure 1). Other students, motivated by their social connections, then help clarify, as can the teaching staff. In our experience, a lively, Facebook-style discussion about the material then often develops (often during those late-night hours that federal regulations require faculty to be sleeping). Because, in our experience, students are highly motivated to provide feedback to their peers in near real time, teaching in this way teaches the students who serve as teachers and learning is greatly enhanced.

Figure 1 Collaborative Video Annotation

The lecture video on the left, slides on the top right, and discussion forum at the bottom right can all be resized. The solid line illustrates, for the purpose of this article, the connection between a point on the timeline and the comments. (Color online)

Finally, collaborative video annotation can improve the classroom experience. First, it enables the instructor to take and encourage questions even at the cost of not getting through the planned material; collaborative video annotation makes it easy to assign the portions of the lecture for which there was no time during class. Second, students come to class much more prepared. Of course, we also assign written material for students to consume, but social science evidence indicates that seeing the same material via different modalities enhances learning (Mayer 2003). (Practicing what we teach, we have posted a video explaining many of the points in this article at j.mp/HUteach.)

Making Reading Interactive.

Suppose you are assigned a chapter to read and you cannot understand one of the key points on the second page. In a traditional class, you would be expected to meet with a teaching assistant at office hours or in the scheduled section meeting. In either case, that could be days from now; if you wait, you will lose sight of what you were reading and probably will not have time to complete the assignment. Alternatively, you could skip this key point, pretend you understand it, make some confused assumption about it, and keep reading. Either option violates all three social science teaching principles.

Instead, our practice is to follow analogous techniques for reading assignments as we do with videos. To do this, we put all class readings in a collaborative text annotation system. We used NB,Footnote 3 an online annotation system that was created by faculty and graduate students at the Computer Science and Artificial Intelligence Laboratory at MIT. (A modified screen shot of NB is in figure 2.) NB and other such systems enable students to highlight passages in the text they do not understand and ask questions in a separate text field. Other students, or the professor or teaching assistants, can then see the questions posted and instantly respond—again 24/7. On any given night, even if class is not scheduled to meet for another week, students are able to receive fast feedback on their questions (in a Facebook-style discussion forum). Because teaching teaches the teacher, considerable pedagogical benefits go not only to those who get their questions answered but also to those answering questions. And because social connections motivate, students give more time and attention to the readings and the class than they would otherwise.

Figure 2 Collaborative Text Annotation

A screen shot of NB software, with a solid line added to highlight the connection between the assigned text (a) and discussion forum (b). (Color online)

E-mail Lists to Create Community.

Many university courses today have a “class e-mail list” that instructors use to disseminate logistical information to students. (Our e-mail list was created by the Institute for Quantitative Social Science at Harvard for this class, but instructors can quickly create e-mail or discussion lists in many other ways, for example via Google Groups or services such as Piazza.com.) We go further and encourage students to ask questions of the entire e-mail list, instead of just the teaching staff, and ask those students who know the answer to make a contribution by responding. This speeds feedback, helps some students get the benefit of being teachers, and motivates them with social connections. Also, we eliminate any reason to wait until “business hours” to contact, or receive a response from, the teaching staff. To enhance social connections, students can include noncourse-related information when it helps build camaraderie in the class; this may even include job opportunities, relevant papers, conferences that might be of interest, and class social events.

In addition, we recently discovered that the class's e-mail list archives were available going back more than a decade. By making them searchable, we turned this information into a knowledge base, as well as a community in its own right. Students now have access to more than 10,000 class e-mails covering many topics and providing instantaneous answers to hundreds of key questions. (Figure 3 displays a screen shot of a sample search (a) and a sample answer (b).) In addition, we obtained permission from Harvard's General Counsel to make available not only the questions and answers in the archive, but also the author of each e-mail. Thus, the archive not only provides instant feedback, it also gives students a glimpse into a remarkable network of students who have taken this class in previous years. A tremendously motivating feature of this innovation is finding a question similar to yours posted by a student who now happens to be a tenured professor at a major university, a partner in a law firm, or a leader of a major corporation.

Figure 3 Querying the Class E-mail

Class e-mail archives (a) and getting an answer (b). (Color online)
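To give a concrete sense of what making an e-mail archive searchable involves, the minimal sketch below builds a tiny inverted index over a toy archive and answers conjunctive keyword queries. It is only an illustration of the idea, written in Python; the field names ('id', 'author', 'subject', 'body') and the archive format are hypothetical, and our actual system differs in implementation and scale.

```python
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z0-9']+", text.lower())

def build_index(messages):
    """Map each token to the set of message ids that contain it.

    `messages` is a list of dicts with (hypothetical) keys
    'id', 'author', 'subject', and 'body'.
    """
    index = defaultdict(set)
    for msg in messages:
        for token in tokenize(msg["subject"] + " " + msg["body"]):
            index[token].add(msg["id"])
    return index

def search(index, messages, query):
    """Return messages containing every query term, newest first."""
    ids = None
    for token in tokenize(query):
        hits = index.get(token, set())
        ids = hits if ids is None else ids & hits
    by_id = {m["id"]: m for m in messages}
    return [by_id[i] for i in sorted(ids or set(), reverse=True)]

# Example usage with a toy two-message archive
archive = [
    {"id": 1, "author": "A. Student", "subject": "Question about likelihood",
     "body": "How do I derive the log-likelihood for the probit model?"},
    {"id": 2, "author": "B. Alum", "subject": "Re: Question about likelihood",
     "body": "Take the log of the product of individual probabilities."},
]
idx = build_index(archive)
for hit in search(idx, archive, "log likelihood"):
    print(hit["author"], "-", hit["subject"])
```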

We also encourage the time-spanning nature of the class community by building and regularly posting to a Facebook group exclusive to class alumni. We use this Facebook group to communicate job opportunities, data problems, methodological advice, and other information.

2.2. Inside the Classroom

So if all this activity is going on outside the classroom, what is the point of going to class? The innovations in Section 2.1, if used properly and along with other innovations, can improve the classroom experience itself and greatly increase the amount of information learned overall.

Understanding Confusions.

Using the innovations we introduce outside of the classroom, instructors can learn exactly what students have the most trouble with and use that information to make the classroom experience far more powerful. Thus, for video and text annotation, and for students querying our e-mail database, we collect ongoing data about the topics students discuss, the kinds of questions they ask, and how they answer others' questions. Before each class, we automatically construct and study heat maps of the readings and the assigned lecture timelines, colored by the intensity of annotations. By additionally soliciting students' feedback, we can piece together before walking into class (1) what students think or say they are confused about and (2) what students are actually confused about, as judged by direct evaluation.
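As an illustration of how such a heat map might be computed, the following minimal sketch bins annotation timestamps (seconds into the lecture video) into one-minute intervals and renders the counts as a one-row heat map. The data format and the numbers are hypothetical; the CAT system records and displays this information differently.

```python
import matplotlib.pyplot as plt

def annotation_intensity(timestamps_sec, lecture_minutes):
    """Count annotations falling in each one-minute bin of the lecture."""
    counts = [0] * lecture_minutes
    for t in timestamps_sec:
        minute = min(int(t // 60), lecture_minutes - 1)
        counts[minute] += 1
    return counts

# Hypothetical annotation timestamps (seconds into a 90-minute lecture video)
timestamps = [125, 131, 140, 1800, 1802, 1815, 1830, 4000, 4010, 5200]
counts = annotation_intensity(timestamps, lecture_minutes=90)

# Render a one-row "heat map": darker cells mark where students annotated most
fig, ax = plt.subplots(figsize=(8, 1))
ax.imshow([counts], aspect="auto", cmap="Reds")
ax.set_yticks([])
ax.set_xlabel("Minute of lecture")
ax.set_title("Annotation intensity (hypothetical data)")
plt.tight_layout()
plt.show()
```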

How exactly do we use this knowledge to increase what students learn? We do this in two ways.

Informed Lecturers.

First, the social science learning principles also apply to us as teachers. The instant feedback these technological innovations give us about what students are having trouble with ought to improve teaching far more than end-of-semester student evaluations or even midterm or final exams. We use this information to focus more time on material we now know students have stumbled over or find confusing. If students have seen a video presentation of a lecture from a previous year, we develop a new way to approach the material for the current year. We also stop and prompt students for questions during the parts of the lecture that we now know they will find difficult, or we ask questions ourselves to generate discussion.

Computer-Assisted Peer Instruction.

Second, because students have learned far more outside of class than is typical, and because our lectures are more effectively directed to what they do not understand, we spend less time presenting traditional lectures. This is a substantial benefit because—although lectures may generate a kind of “collective effervescence” that people resonate with, much like they do with concerts, sporting events, or religious rituals (Konvalinka et al. 2011), and that possibly increases cooperation (Wiltermuth and Heath 2009) and further engagement—lectures also include minimal feedback for the instructor, minimal feedback for the students, minimal social connections among the students, and little opportunity for students to learn by teaching.

Thus, we spend a portion of the class on a version of “computer-assisted peer instruction” (CAPI). Peer instruction was introduced by Eric Mazur (1997; see also Crouch and Mazur 2001; Fagen, Crouch, and Mazur 2002) and has seen widespread use (it is related to a similar protocol called “team-based learning”; see Sweet and Michaelsen 2012).

First, we use an automated system we helped develop called “Learning Catalytics” that implements CAPI and that students sign into when they come to class.Footnote 4 (Instead of prohibiting smartphones in class, we require them or, alternatively, a laptop, tablet, or some other web-enabled device.) We then deliver to their device (and, optionally, also to the screen at the front of the room) a difficult conceptual question. We give the students a minute or two, without discussion, to reflect on the question and to indicate their answer on their device. The question can be one of many types—multiple choice, a freehand drawing, a mathematical expression, a directional vector, unstructured text, a map highlight, or others. We construct the question out of the most difficult parts of the week's assignments, so that, ideally, only about 20% of students initially get the answer correct.

Next, Learning Catalytics automatically puts students into groups of two to five in preparation for a discussion about the question. We use an automated, analytical approach to select students into groups so that the conversation will be maximally productive. This system is continually updated, but as predictors we begin with data collected to characterize each student at the start of the semester and add each student's initial answer to the question just asked, their answers to all previous CAPI questions, their experience in the system, and how productive the previous CAPI discussions they participated in were. Finally, data from thousands of other similar students in hundreds of other classrooms taking similar courses can be used as well.
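The grouping algorithm itself is part of Learning Catalytics and is not reproduced here; the sketch below is only a simple illustrative heuristic showing one way that initial answers alone could be used to form small groups containing disagreement, which is the raw material for a productive discussion.

```python
from collections import defaultdict
from itertools import zip_longest

def group_by_disagreement(initial_answers, group_size=3):
    """Form groups of `group_size` that mix students who answered differently.

    `initial_answers` maps student id -> first-round answer. Students are
    bucketed by answer and then dealt round-robin into groups, so each group
    contains as many distinct answers as possible.
    """
    buckets = defaultdict(list)
    for student, answer in initial_answers.items():
        buckets[answer].append(student)
    # Interleave the buckets: take one student per answer option, repeatedly
    interleaved = [s for round_ in zip_longest(*buckets.values())
                   for s in round_ if s is not None]
    return [interleaved[i:i + group_size]
            for i in range(0, len(interleaved), group_size)]

# Example: eight students, two answer options, groups of three
answers = {"s1": "A", "s2": "B", "s3": "A", "s4": "B",
           "s5": "A", "s6": "A", "s7": "B", "s8": "A"}
print(group_by_disagreement(answers))
```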

Once grouped, the system delivers to each student's device instructions regarding which other students to talk with and (optionally) where to move their seat to have the discussion. (Most instructors spend time and effort trying to convince students to fill in seats up front; as an alternative, we can let students sit where they like when they walk in, but on the first CAPI question we automatically assign each student a seat where we want them to sit. We then avoid transaction costs for the remaining CAPI questions and choose groups that do not require students to move.)

Next, we ask the students to try to persuade the other members of their group of the veracity of their answers. Because social connections motivate, we often get highly animated discussions. (Over the course of the semester, we use different groupings so students get to know more students than just the friends they came in with.) We allow the ensuing discussion to continue for approximately two to seven minutes, permitting the time to vary according to the complexity of the question. During this time, the teaching staff move among the groups as participants or just listening in and learning about the students' misunderstandings and difficulties. Because teaching teaches the teacher, having the students try to persuade their classmates substantially improves their understanding of the subject matter. This is even true for those who got the right answer the first time.

We then deliver the same question to each student's device again and have them answer it. A minute or two later we project on the screen at the front of the classroom (and to the students' devices) a summary of the answers before and after discussion, which gives the students immediate feedback. For multiple-choice questions, we use overlapping histograms. For freehand drawings, we superimpose all the drawings on top of one another (using alpha-transparency). For equations, we automatically check for algebraically equivalent versions. For free text, we cluster responses. When it works best—which, as in survey research, is primarily a function of our asking sufficiently clear questions—the proportion of correct student answers increases from 20% to more than 80%.
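For a rough sense of what the multiple-choice summary looks like, the sketch below overlays the before- and after-discussion answer distributions as semi-transparent histograms. The answer data are invented for illustration; this is not the Learning Catalytics display code.

```python
from collections import Counter
import matplotlib.pyplot as plt

def plot_before_after(before, after, options=("A", "B", "C", "D")):
    """Overlay answer distributions from before and after peer discussion."""
    b, a = Counter(before), Counter(after)
    x = range(len(options))
    # Semi-transparent bars so the two distributions overlap visibly
    plt.bar(x, [b[o] for o in options], alpha=0.5, label="Before discussion")
    plt.bar(x, [a[o] for o in options], alpha=0.5, label="After discussion")
    plt.xticks(list(x), options)
    plt.ylabel("Number of students")
    plt.legend()
    plt.show()

# Hypothetical answer records for one CAPI question (option "C" is correct)
before = ["A"] * 10 + ["B"] * 8 + ["C"] * 6 + ["D"] * 6
after = ["A"] * 2 + ["B"] * 2 + ["C"] * 24 + ["D"] * 2
plot_before_after(before, after)
```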

Figure 4 gives an example of a multiple-choice question delivered to a student's phone (a) and the instructor's view (c). After the first round, a personal message is delivered to each student's phone or other device that tells them who to discuss their question with (see note in phone (a)). A seating chart appears at the top right for the instructor, coded with letters for each answer and green for correct (b); the grouping is also shown. The instructor can also see, and optionally can show to the class, histograms of student answers before and after discussion (panel c). Finally, students are given the option of indicating whether they “get it now” or “still don't get it” (see current tally at bottom right of panel (c)) to provide instant feedback to the instructor.

Figure 4 Multiple Choice Question Delivered to a Student's Smartphone

Sample question as seen on student smartphone (a), example class discussion groupings (b), and histograms for the instructor (and optionally to share with students) showing student responses before and after discussion (c). (Color online)

We intersperse CAPI questions at different points in the lecture. To ensure that students are in a participatory mood during the lecture, we usually begin class with a CAPI question. We also use questions at the most difficult points in the lecture. Indeed, many who use CAPI do not lecture at all, thus completely “flipping the classroom,” as the practice is sometimes called, but with computer assistance. Students are told that answers to these questions do not count toward their grade (beyond credit for participation).

The last time we taught with this technology, we asked 14 questions with quantifiably correct answers. Among these, the median increase in the percent correct was a substantial 31 percentage points. We also collected data from a sample of courses using Learning Catalytics across disciplines from other instructors, for classes in which we had no part and at colleges and universities all over the country. These data include 275 distinct questions asked in classes that were part of 19 separate courses, in four different disciplines (statistics, physics, math, and biology). Each of these questions was asked of a classroom of students, followed by an application of our automated grouping technology, peer instruction, and a repeat of the same question. Figure 5 summarizes these results with the median percentage point increase (black dot) and the 10th and 90th percentile values (vertical lines at the ends of each horizontal bar). As is evident from figure 5, the learning from CAPI, without any instructor intervention, is considerable—similar to the results we found in our class. (Furthermore, in our experience, students who learn in this way during CAPI retain the information for far longer than for any other teaching method we have tried.)

Figure 5 Median Percentage Point Increase

Percentage point increase in the percent of students giving the correct answer from before to after computer-assisted peer instruction. Medians and 10th and 90th percentiles averaged by discipline.
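The summaries in figure 5 are simple order statistics of per-question gains. A minimal sketch of the computation, with invented numbers and hypothetical column names, might look like this:

```python
import pandas as pd

# Hypothetical per-question records: discipline and the percentage-point
# change in correct answers from before to after peer discussion
questions = pd.DataFrame({
    "discipline": ["statistics", "statistics", "physics", "physics",
                   "math", "biology", "biology", "biology"],
    "gain_pct_points": [31, 25, 40, 18, 22, 35, 28, 12],
})

# Median and 10th/90th percentile gain within each discipline,
# the quantities plotted in figure 5
summary = questions.groupby("discipline")["gain_pct_points"].agg(
    median="median",
    p10=lambda g: g.quantile(0.10),
    p90=lambda g: g.quantile(0.90),
)
print(summary)
```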

CAPI can also be used for subjects with no definitive answer, such as philosophy, where it encourages students to hone their arguments and debating skills. The difference is that we do not necessarily expect a particular directional change in the percent giving each answer, but the method seems to do a good job of helping students tune their debating skills and understand the issues at hand more deeply.

2.3. Across Classrooms

During the last few decades, social scientists have been highly successful in convincing the world outside of academia about the value of large-scale observational and experimental data collection and analysis. After all, this type of quantitative social science has already remade existing companies and established new industries; led to a huge increase in the expressive capacity of human beings; and had a role in reinventing medicine, friendship networks, political campaigns, public health, legal analysis, policing, economics, sports, public policy, and program evaluation, among many other areas. In recent years, what is effectively quantitative social science is much of what is now known to the general public as “big data.” There is no reason why those of us in social science departments who are responsible for creating, applying, and popularizing the innovations that made these changes possible should not also turn this productive machinery to improve our teaching.

Unfortunately, using data collection to improve teaching and learning beyond a single classroom is rare, at least aside from end-of-semester student ratings. Although we have greatly increased the amount of data collected about the classrooms we control, many of us need to work with university officials to implement big data strategies for education. With appropriate protections for individual student privacy and compliance with federal regulations, we should do what most businesses do now and instrument as many aspects of the university as possible. The results may be substantial. For example, instead of students receiving ad hoc, idiosyncratic advice from a few other students they happen to know regarding what to major in, what classes to take, and what careers to pursue, good data collection and analytics can give students systematic advice drawn from tens of thousands of previous students they would never have time to know. Students can study many more paths through a college education and see which suit them, understand what hurdles stand in their way, what roadblocks they should avoid, and which choices will confront them. We can use instructional staff more efficiently and learn from each other what works, what does not, and what works well only for some instructors. Instead of instructors experimenting by changing what they do in their classrooms and never evaluating it because of the absence of a proper control group, they can learn from observational studies—if we make the effort to collect the data and apply to our teaching the same methods we use for research.

University officials, faculty committees, and staff need to take on board the overwhelming impact social science research has on every area it touches and how it can revolutionize university operations to improve teaching and learning as well.

CONCLUDING REMARKS

In recent years, rigorous education-related research has taken root within physics (Mazur 1997; Deslauriers, Schelew, and Wieman 2011), chemistry (Golde, McCreary, and Koeske 2006), computer science (Porter et al. 2011), medicine and nursing (Ende 1983; Hodder et al. 1989; Rao and DiCarlo 2000), and other areas. In addition, numerous science departments now have dedicated research groups, faculty lines, postdocs, and other staff who specialize in education research adapted to their disciplinary areas—for example, the physics education groups at the University of Arizona, the University of Colorado Boulder, Harvard University, Kansas State University, the University of Maryland, the Ohio State University, and others; the chemistry education groups at Iowa State University, Purdue University, and Cambridge University; the computer science education groups at Bowdoin College, Duke University, and Villanova University; and the medical education research and evaluation group at Stanford University, among others.

These groups are studying an aspect of human behavior—that is, they are doing social science research. It is gratifying to see another area where we have had an influence, but social scientists are, of course, especially well situated to make major contributions to these emerging literatures. We should accept the challenge and encourage our colleagues to join in, systematize social science knowledge, harvest useful social science generalizations for teaching, develop new technologies and innovations that improve our teaching and our students' learning, and contribute our valuable faculty lines. The likely result will be classrooms filled with better-educated and more knowledgeable students, albeit ones still trying very hard to look attentive.

ACKNOWLEDGMENTS AND DISCLAIMER

For advice, suggestions, comments, and help with various technologies, we thank Ferdi Alimadhi, Tim Brenner, Matthew Chingos, Merce Crosas, Phil Desenne, Leslie Finger, Dave Kane, David Karger, Konstantin Kashin, Brian Lukoff, EJ Marmonti, Eric Mazur, Jen Pan, Molly Roberts, Julie Schell, Brandon Stewart, Dustin Tingley, and Sacha Zyto. Gary King is cofounder of Learning Catalytics (recently acquired by Pearson).

Footnotes

1 About 80% of children with some types of cancer take part in randomized experiments that will help only the next child diagnosed. As hard as it would be, we ought to be able to find a way to convince faculty to participate in teaching experiments too.

2 See http://harvard-atg.github.com/Catool. A modified screen shot of this video annotation tool is in figure 1.

REFERENCES

Barab, Sasha A., and Thomas M. Duffy. 2000. “From Practice Fields to Communities of Practice.” Theoretical Foundations of Learning Environments 1: 25–55.
Burn, Shawn M. 1991. “Social Psychology and the Stimulation of Recycling Behaviors: The Block Leader Approach.” Journal of Applied Social Psychology 21 (8): 611–29.
Cacioppo, John T., James H. Fowler, and Nicholas A. Christakis. 2009. “Alone in the Crowd: The Structure and Spread of Loneliness in a Large Social Network.” Journal of Personality and Social Psychology 97 (6): 977.
Centola, Damon. 2010. “The Spread of Behavior in an Online Social Network Experiment.” Science 329 (5996): 1194–97.
Chi, Michelene T., Nicolas DeLeeuw, Mei-Hung Chiu, and Christian Lavancher. 1994. “Eliciting Self-Explanations Improves Understanding.” Cognitive Science 18 (3): 439–77.
Chingos, Matthew M. 2013. “Class Size and Student Outcomes: Research and Policy Implications.” Journal of Policy Analysis and Management 32 (2): 911–38.
Christakis, Nicholas A., and James H. Fowler. 2007. “The Spread of Obesity in a Large Social Network over 32 Years.” New England Journal of Medicine 357 (4): 370–79.
Christakis, Nicholas A., and James H. Fowler. 2008. “The Collective Dynamics of Smoking in a Large Social Network.” New England Journal of Medicine 358 (21): 2249–58.
Crouch, Catherine H., and Eric Mazur. 2001. “Peer Instruction: Ten Years of Experience and Results.” American Journal of Physics 69: 970.
Dawson, Shane. 2006. “A Study of the Relationship between Student Communication Interaction and Sense of Community.” The Internet and Higher Education 9 (3): 153–62.
DeSchryver, M., P. Mishra, M. Koehler, and A. P. Francis. 2009. “Moodle vs. Facebook: Does Using Facebook for Discussions in an Online Course Enhance Perceived Social Presence and Student Interaction?” In Proceedings of Society for Information Technology and Teacher Education International Conference, 329–36. Chesapeake, VA.
Deslauriers, Louis, Ellen Schelew, and Carl Wieman. 2011. “Improved Learning in a Large-Enrollment Physics Class.” Science 332 (6031): 862.
Dihoff, Roberta E., Gary M. Brosvic, and Michael L. Epstein. 2003. “The Role of Feedback during Academic Testing: The Delay Retention Effect Revisited.” Psychological Record 53 (4): 533–48.
Dihoff, Roberta E., Gary M. Brosvic, and Michael L. Epstein. 2004. “Provision of Feedback during Preparation for Academic Testing: Learning Is Enhanced by Immediate but Not Delayed Feedback.” Psychological Record 54 (2): 207–32.
DiPrete, Thomas A., Andrew Gelman, Tyler McCormick, Julien Teitler, and Tian Zheng. 2011. “Segregation in Social Networks Based on Acquaintanceship and Trust.” American Journal of Sociology 116 (4): 1234–83.
Dubner, Stephen J., and Steven D. Levitt. 2006. “Freakonomics: A Star Is Made.” New York Times Magazine, May 7.
Ende, Jack. 1983. “Feedback in Clinical Medical Education.” The Journal of the American Medical Association 250 (6): 777.
Fagen, Adam P., Catherine H. Crouch, and Eric Mazur. 2002. “Peer Instruction: Results from a Range of Classrooms.” Physics Teacher 40 (4): 206–09.
Fowler, James H., and Nicholas A. Christakis. 2008. “The Dynamic Spread of Happiness in a Large Social Network.” BMJ: British Medical Journal 337: a2338.
Garrison, D. Randy, Terry Anderson, and Walter Archer. 1999. “Critical Inquiry in a Text-Based Environment: Computer Conferencing in Higher Education.” The Internet and Higher Education 2 (2–3): 87–105.
Gerber, Alan S., Donald P. Green, and Christopher W. Larimer. 2008. “Social Pressure and Voter Turnout: Evidence from a Large-Scale Field Experiment.” American Political Science Review 102 (1): 33–48.
Golde, Michael F., Christine L. McCreary, and Randi Koeske. 2006. “Peer Instruction in the General Chemistry Laboratory: Assessment of Student Learning.” Journal of Chemical Education 83 (5): 804.
Graff, Martin. 2003. “Individual Differences in Sense of Classroom Community in a Blended Learning Environment.” Journal of Educational Media 28 (2–3): 203–10.
Hattie, John, and Helen Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77 (1): 81.
Hodder, R. V., R. N. Rivington, L. E. Calcutt, and I. R. Hart. 1989. “The Effectiveness of Immediate Feedback during the Objective Structured Clinical Examination.” Medical Education 23 (2): 184–88.
Killingsworth, Matthew A., and Daniel T. Gilbert. 2010. “A Wandering Mind Is an Unhappy Mind.” Science 330 (6006): 932.
King, Gary, and Maya Sen. 2013. “The Troubled Future of Colleges and Universities.” PS: Political Science and Politics 46 (1): 81–89.
Konvalinka, I., D. Xygalatas, J. Bulbulia, U. Schjødt, E. M. Jegindø, S. Wallot, G. Van Orden, and A. Roepstorff. 2011. “Synchronized Arousal between Performers and Related Spectators in a Fire-Walking Ritual.” Proceedings of the National Academy of Sciences 108 (20): 8514.
Lim, Chaeyoon, and Robert D. Putnam. 2010. “Religion, Social Networks, and Life Satisfaction.” American Sociological Review 75 (6): 914–33.
Mayer, Richard E. 2003. “The Promise of Multimedia Learning: Using the Same Instructional Design Methods across Different Media.” Learning and Instruction 13 (2): 125–39.
Mazur, Eric. 1997. Peer Instruction: A User's Manual. Upper Saddle River, NJ: Prentice Hall.
McDermott, Rose, James H. Fowler, and Nicholas A. Christakis. 2009. “Breaking Up Is Hard to Do, Unless Everyone Else Is Doing It Too: Social Network Effects on Divorce in a Longitudinal Sample Followed for 32 Years.” Available at SSRN: http://ssrn.com/abstract=1490708 or http://dx.doi.org/10.2139/ssrn.1490708.
Morse, Gardiner. 2012. “The Science behind the Smile: An Interview with Daniel Gilbert by Gardiner Morse.” Harvard Business Review, January–February: 84–90.
Pachucki, Mark A., Paul F. Jacques, and Nicholas A. Christakis. 2011. “Social Network Concordance in Food Choice among Spouses, Friends, and Siblings.” American Journal of Public Health 101 (11): 2170.
Porter, Leo, Cynthia Bailey Lee, Beth Simon, and Daniel Zingaro. 2011. “Peer Instruction: Do Students Really Learn from Peer Discussion in Computing?” In Proceedings of the Seventh International Workshop on Computing Education Research, 45–52. ACM.
Rao, Sumangala P., and Stephen E. DiCarlo. 2000. “Peer Instruction Improves Performance on Quizzes.” Advances in Physiology Education 24 (1): 51–55.
Rosenquist, J. Niels, Joanne Murabito, James H. Fowler, and Nicholas A. Christakis. 2010. “The Spread of Alcohol Consumption Behavior in a Large Social Network.” Annals of Internal Medicine 152 (7): 426.
Rovai, Alfred P. 2003. “The Relationships of Communicator Style, Personality-Based Learning Style, and Classroom Community among Online Graduate Students.” The Internet and Higher Education 6 (4): 347–63.
Shea, Peter. 2006. “A Study of Students' Sense of Learning Community in Online Environments.” Journal of Asynchronous Learning Networks 10 (1): 35–44.
Summers, Jessica J., and Marilla D. Svinicki. 2007. “Investigating Classroom Community in Higher Education.” Learning and Individual Differences 17 (1): 55–67.
Sweet, M., and L. K. Michaelsen. 2012. Team-Based Learning in the Social Sciences and Humanities: Group Work That Works to Generate Critical Thinking and Engagement. Sterling, VA: Stylus Publishing.
VanderWeele, Tyler J. 2011. “Sensitivity Analysis for Contagion Effects in Social Networks.” Sociological Methods & Research 40 (2): 240–55.
VanLehn, Kurt, Arthur C. Graesser, G. Tanner Jackson, Pamela Jordan, Andrew Olney, and Carolyn P. Rosé. 2007. “When Are Tutorial Dialogues More Effective than Reading?” Cognitive Science 31 (1): 3–62.
Whitehurst, Grover (Russ). 2010. “Education Research: Past, Present, and Future.” Policy Perspectives, 1–12. https://www.wested.org/cs/we/views/rs/1006.