Critical Research Review of “Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study” by Ertmer et al. (2007).
1. Identify the clarity with which this article states a specific problem to be explored.
The focus of this research study was clearly and consistently presented in both the abstract and introduction. The authors stated that this was an exploratory study designed to determine whether peer feedback in online discussions can be used as an instructional tool to increase the quality of postings. This was also made clear in the breakdown of three research questions stated in the section titled Purpose of the Study. In this section, the authors also stated that this study would help fill a void in research on the impact of peer feedback.
2. Comment on the need for this study and its educational significance as it relates to this problem.
As stated in the authors’ literature review and introduction, effective discussions prompt students to reflect and engage in critical thinking. Feedback on discussion posts can promote this higher level of thinking. The introduction noted the importance of discussions and how they lend themselves to constructing knowledge and understanding. The authors specifically mentioned Vygotsky’s (1978) Zone of Proximal Development and how students can reach a deeper understanding through interaction with others (Ertmer et al., 2007, p. 413).
A good discussion of the importance of feedback was made and supported. The authors presented seven key characteristics of good feedback according to Nicol and Macfarlane-Dick (2006). These characteristics were detailed and used to support the importance of feedback. The authors also made a good argument for using peer feedback as an instructional strategy to lighten the instructor’s workload while still providing students with prompt, constructive, supportive, and substantive feedback.
3. Comment on whether the problem is “researchable”? That is, can it be investigated through the collection and analysis of data?
The problem of determining the impact of peer feedback can certainly be investigated and researched. Because students can give a wide variety of responses and feedback, investigation is best done by grading or coding those responses. An exploratory study such as this one examines the level of responses and scores them according to a predetermined scale, which enables data to be collected and analyzed. A more detailed coding of feedback might be more appropriate and provide a better overview of the type of feedback that is most beneficial.
As for the students’ perceptions of the value of feedback, the survey and interview methods were a good choice. By using surveys and interviews, the researcher becomes a human instrument collecting a type of data that could not otherwise be gathered through quantitative measures (Hoepfl, 1997).
Theoretical Perspective and Literature Review
4. Critique the author’s conceptual framework.
The authors presented four specific pillars in their conceptual framework: (1) Role of Feedback in Instruction; (2) Role of Feedback in Online Environments; (3) Advantages to Using Peer Feedback; and (4) Challenges to Using Peer Feedback. Each of these was explored in depth in its own section of the literature review. The authors spent time clarifying what previous research had defined as good feedback and how such feedback is crucial for strengthening a student’s ability to self-regulate. They discussed the need for feedback to improve the construction of knowledge, which ties in with the discussion in the introduction supported by Roehler and Cantlon (1997). They effectively drew on Vygotsky’s Zone of Proximal Development to support the idea of increased learning through discussion and interaction with others.
Using peer feedback to supply prompt responses was posed as a possible solution to the increased workload instructors face when dealing with a large volume of online postings. This connects with the problem to be studied, but the authors did not offer any evidence from previous studies that this is a successful alternative to teacher feedback, as that is part of the focus of this study. This argument may be better placed in the section on Advantages to Using Peer Feedback rather than in its current position under Role of Feedback in Online Environments.
Advantages and challenges to using peer feedback were presented with references to previous studies indicating that peer feedback promotes the skills needed to assess one’s own work and learning, thus building autonomous learners. The authors also addressed student perceptions in these sections, which fits nicely with the research questions posed. They discussed, with supporting research, students’ anxiety that their feedback may be invalid because they are not as knowledgeable as the instructor.
Overall, the authors did a nice job connecting the role of feedback, its importance for increasing quality, and how using peer feedback as an instructional strategy may help fulfill this need for critical, timely feedback.
5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?
The authors presented a good base of prior research to support and connect their theory that the instructional strategy of using peer feedback may increase the quality of student postings. Research was cited showing that students build the skills needed to assess their own learning, and that reflecting on their own and their peers’ work can lead to increased quality in their own postings (Ertmer et al., 2007, p. 415). The authors cited research from McConnell (2002) on autonomous learners and on building the skills needed to assess one’s own work and learning through collaborative assessment. This is a nice added support for peer feedback and its possible benefit of increased learning.
Citing Ertmer and Stepich (2004), the authors stated that research has shown feedback impacts the quality of posts, specifically when it is timely, consistent, constructive, and ongoing. This again relates to the overall value of feedback. The authors stated that little, if any, research has been done on the impact of peer feedback on the quality of postings, which is the focus of this study.
Quite a bit of theory is presented to support the authors’ claim that the instructional strategy of using peer feedback may reduce the workload of instructors while still providing quality feedback. This, however, is not actually one of the research questions presented, although it blends with what they studied and with what is presented in the literature review. Overall, the relevant studies presented tie in with the study effectively.
6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?
The literature review addressed each of the four pillars in the authors’ framework and concluded with a section on the purpose of the study, where three research questions were presented. In this “Purpose of the Study” section, the authors addressed the lack of studies on the impact of peer feedback on the quality of students’ postings. They did not actually summarize the previously presented literature review. A paragraph or two summarizing and pulling together each of the areas presented would be helpful and would lead nicely into the stated research questions. Without this, the sections between the “Introduction” and the “Purpose of the Study” are a little disjointed. The implications for the problem investigated were mentioned throughout the sections preceding the purpose paragraph; these would be worth mentioning again in a short summary to tie them together with the research questions.
7. Evaluate the clarity and appropriateness of the research questions or hypotheses.
Three research questions are posed in a clear and concise manner. These questions address the impact of peer feedback on learning, the students’ perceptions of receiving peer feedback versus teacher feedback, and their perceptions of giving peer feedback. These research questions encompass the purpose of the study and relate nicely to the literature review and conceptual framework.
Research Design and Analysis
8. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.
This study was conducted as a case study using both descriptive and evaluative approaches. The authors stated that this was considered an appropriate method due to their focus on a “contemporary phenomenon within a real-life context,” according to Yin (2003). This statement is correct, as they aimed to describe the characteristics of students and their perceptions, as well as the impact of peer feedback on the quality of work. This method enabled the researchers to conduct an in-depth study of a small group of participating students in order to gain insight into their feelings about feedback. The approach combined descriptive methods, to determine their perceptions, with evaluative methods, to examine the impact peer feedback had on the quality of postings.
A descriptive approach is appropriate for focusing on the “what” portion of the problem. This falls in line with the research questions posed on students’ perceptions of the value of both receiving and giving peer feedback, as well as comparing this to instructor feedback. Descriptive methods often utilize surveys and interviews to report how things are, describing characteristics or attributes (Randolph, 2007). This study did exactly that, using pre- and post-surveys as well as interviews to address the second and third research questions on student perceptions of giving and receiving peer feedback and on the value of peer feedback compared to instructor feedback.
The evaluative approach is suitable for evaluating the impact of an intervention or program and determining whether the desired end result is achieved. This was an appropriate method for the research question of determining the impact of peer feedback and whether it would increase the quality of student postings.
9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.
The sample chosen for this study consisted of 15 graduate-level students enrolled in one course. Having students from a single course was appropriate for the design of a small-group case study. It was indicated that these students were in the education field and familiar with Bloom’s taxonomy levels, which was both necessary and convenient, as Bloom’s taxonomy was used to score posts during peer feedback.
The small size of only 15 participants is not representative of all learners and thus makes the results less generalizable to larger and more diverse populations (in age, field of study, etc.). The results obtained here by graduate-level students may be quite different from those produced by an undergraduate-level group. The reader was not made aware of the method used for choosing this particular class or these students. Choosing a sample that is a good representation of the population helps with generalizability (Randolph, 2007, p. 42). This may be a good sample of graduate students in education, but it is not a sample of all students in online courses.
10. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).
The overall procedure of this study was sound, employing both qualitative and quantitative tools for collecting data. The study began with instructor feedback only, in order to model the expected responses and procedure. After week 5, the students took a pre-survey on their perceptions of feedback and its importance, as well as whether the instructor feedback had met their expectations. This process provided a good baseline for comparison after the peer-feedback intervention had taken place. After the students provided peer feedback for several weeks, a post-survey was administered to again gather information on the students’ perceptions of giving and receiving peer feedback and how it compared to instructor feedback. These surveys included both Likert-style and open-ended items to ensure a deeper exploration of the topics. However, the surveys were not appended to the study in full; some example questions were presented, but readers are not able to view the entire instrument.
Participants were also interviewed at the end of the course to gain more insight into their thoughts on the peer feedback process. The authors stated that they used an interview protocol; however, the full interview questions were not made available to the reader, so readers cannot examine them for possible issues. Data from these interviews and the scores on students’ postings, both before and after the peer feedback weeks, were triangulated. The authors used a set of standardized codes and qualitative analysis software to analyze the interview responses.
11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.
Validity and reliability issues were addressed in their own section. The authors pointed out several areas of concern where they took appropriate measures to increase the validity and reliability of results. Sample postings and scores were provided to students prior to giving peer feedback, and instructor modeling of grading with the Bloom’s taxonomy scoring rubric helped address validity concerns; the participants were also already familiar with Bloom’s taxonomy. Triangulation of data served this purpose as well. Multiple interviewers and evaluators were used in order to reduce bias. The authors mention in this section that a standardized interview protocol was used to ensure reliability in the interview data, but as noted previously, the interview questions were not included in the study article. The authors used check-coding for inter-rater reliability, which is particularly crucial in rating interview data to determine whether the raters are reliable in each case (Drost, 2011). However, the authors did not discuss how reliability was addressed among the students doing peer grading with the provided rubric, beyond their previous familiarity with Bloom’s taxonomy.
The method for choosing the sample was not explained or addressed. For validity in drawing an inference from the sample to a whole population, the sample must be a good representation of that population (Randolph, 2007), and such a representation is best selected randomly. The reader cannot determine whether this was a random sample, as the authors did not discuss it. Given the study design, a random sample may have been impractical, since all participants needed to be enrolled in the course. This could be considered purposive sampling, with the argument that the sample is typical of the population studied (Randolph, 2007). An explanation of how the sample was chosen would be a good addition to this section for validity.
Although some concerns for the study were not addressed, overall the authors did a good job explaining how many of the validity and reliability concerns were handled.
Interpretation and Implications of Results
13. Critique the author’s discussion of the methodological and/or conceptual limitations of the results.
The authors addressed three areas of limitation in a designated section. The small sample size was certainly a limitation that impacted the generalizability of the results. Furthermore, as addressed earlier, this sample was taken from a very narrow population: a small group of graduate students already familiar with the Bloom’s taxonomy used in the scoring rubric does not lend itself to transference to all students in online discussion environments. The authors also discussed “the relatively short duration of the study, and the fairly limited scale used to judge the quality of student postings” (Ertmer et al., 2007, p. 428). They recommended a more extended scale in future studies in order to better evaluate increases in the quality of student postings.
Students’ perceptions and feelings about giving and receiving feedback could also be considered a limitation: students were uncomfortable judging classmates’ ideas and worried about the impact of peer scoring on overall grades. The discussion questions were also a concern, as some were not designed to elicit higher-order thinking. The authors mentioned this earlier in the discussion section.
14. How consistent and comprehensive are the author’s conclusions with the reported results?
The authors mentioned several times throughout their discussion that their findings showed no significant improvement in the quality of students’ postings over the course. However, they state at the beginning of their discussion that the results “support the assumption that students’ postings can reach and be sustained at a high level of quality through a combination of instructor and peer feedback” (Ertmer et al., 2007, p. 425). This seems inconsistent with the analyzed data from the scoring of posts using the Bloom’s taxonomy rubric, and with their own statements of no improvement. They also gave an in-depth explanation of several factors that could explain this lack of growth. Despite all of this, they reported that students indicated, in surveys and interviews, that receiving peer feedback had a positive impact on the quality of their own posts. This may be where the authors draw their claim of support, but they should clarify this in their conclusion to avoid confusion.
The authors did a nice job reporting the students’ perceptions of both instructor and peer feedback as gathered from the surveys and interviews. They stated that students’ perceptions of feedback remained the same over the course of the semester, ranking the value of instructor feedback higher than that of peers both before and after the intervention.
15. How well did the author relate the results to the study’s theoretical base?
Overall, the authors did a nice job connecting the results and discussion to the theoretical base presented. The data from the surveys and interviews summarized the students’ perceptions of feedback, indicating that it must be timely, encouraging, and of high quality to help them learn. This was also mentioned throughout the theoretical base in the literature review. The biggest connection is the conclusion that peer feedback may offer the advantage of reducing the instructor’s workload while simultaneously increasing learning and the quality of student postings. This was presented in depth in the conclusion, which listed suggestions for instructors to incorporate when designing discussion boards to include peer feedback. The idea was also strongly presented at the beginning of the article in the sections titled “Role of Feedback in Online Environments” and “Advantages to Using Peer Feedback” (Ertmer et al., 2007, p. 414).
The theoretical base also presented challenges of using peer feedback, including students’ anxiety and concerns about reliability when giving and receiving it. The authors connected nicely to this in their discussion of results in the “Perceived Value and Impact of Giving Peer Feedback” section. They included student comments indicating that many did not feel comfortable giving a classmate a zero score, and the data backed this up, with only 4% of scores being a zero.
16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?
This study raises interesting questions for future studies on peer feedback in online discussion forums. Raising the quality of students’ posts and providing them with timely, valuable feedback is certainly a desired result. However, this study has limitations that impact its generalizability. Because of the very narrow scope of the sample and the issues with the reliability of peer feedback, the study is not highly significant. It offers good insight into graduate-level education students’ perceptions of feedback, but it does not provide implications for undergraduate students or students in other fields. A broader study with a more representative population sample and fewer limitations would need to be completed.
References
Drost, E. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105–123.
Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12, 412–433.
Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. Journal of Technology Education, 9(1), 47–63.
Randolph, J. J. (2007). Multidisciplinary methods in educational technology research and development. HAMK Press.