Critical Research Review #1

Critical Research Review of:

Erhel, S. & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156-167.

 

1. This study asserts that it investigates the effects of instructions and feedback on deep learning and motivation in digital game-based learning (DGBL). Specifically, the authors claim to compare learning instructions with entertainment instructions with respect to promoting deeper learning without negatively impacting motivation. These statements, together with the various aims announced throughout the article, do not present a clear and concise problem to be explored.

The focus of the study is clarified somewhat at the beginning of the section describing the first experiment, where the authors state, “The aim of this study was thus to ascertain whether the effect of instructions given during the reading phase that have been observed for text-based learning would also manifest themselves during DGBL” (p. 158). This statement, however, makes no mention of feedback.

2. Past research on DGBL and its impact on deep learning is cited frequently throughout the study, but the authors contradict themselves in characterizing it. Within a single paragraph (section 1.3), they state both that many studies agree on DGBL’s potential effectiveness and that research results have called that effectiveness into question.

Even though the research question itself is not clearly stated, DGBL is a growing area, and there is a real need to understand best practices and instructors’ technological, pedagogical, and content knowledge (TPACK). Exploration of DGBL should continue, and research should inform decisions about its inclusion in education.

3. Instructions for use, as well as feedback, can affect a learner’s perception and motivation, and it is difficult to rule out other factors when evaluating motivation. Motivation would be best explored qualitatively, through surveys or interviews. This study had students complete questionnaires to assess their motivation after the learning phase, but the actual questionnaire and the coding used for assessment are never presented. Collecting and analyzing quantitative data to measure knowledge and learning, by contrast, is appropriate.

4. The conceptual framework of this study is disjointed and often contradictory, not only among the past studies presented but also relative to the authors’ own assumptions. The literature review is segmented rather than presented whole in one place. The authors appear to be attempting to “cover the field”: many previous studies on various aspects of DGBL are mentioned briefly even when they are not relevant to this particular study.

The authors present past research that is not about DGBL, only to assert that the results may transfer to DGBL. Section 1.4, for instance, offers an in-depth discussion of incidental learning that ends with a summary statement that the same probably holds in DGBL. Support from previous research is claimed even when the connection is weak. A good portion of the introduction to Experiment Two discusses previous research on feedback, corrective and explanatory feedback in particular; the authors then state that, unlike those studies, they will focus on knowledge of correct response (KCR) feedback, which they also note has not been examined in previous research with respect to its effects on cognitive processing in DGBL.

The authors’ expectations are scattered across the introduction, the descriptions of the experiments, the analyses of results, and the discussions. They are clearest in the analyses of results, where the authors state whether each finding is consistent with or contradicts their expectations.

5. The authors present a wide array of theories and previous research on DGBL and motivation, including contradictory statements. There is an extensive review of various studies, including meta-analyses, which the authors then characterize as less focused than their own proposed work. For example, the studies presented on corrective and explanatory feedback did not involve the KCR feedback the authors chose for this study.

A section of the introduction on motivation discusses previous research on the motivational benefits of DGBL but never ties those benefits to instructions or feedback, the focus of this study. Some of the cited references, as in this case, are not directly relevant to the research problem the experiments address.

6. The literature review is divided into sections addressing DGBL, its motivational benefits, those benefits compared with conventional media, and the use of instructions in DGBL to improve learning. A review of the literature on feedback and motivation appears later in the article, immediately before the description of the second experiment. Each section references past research, but many of the cited studies contradict one another within a section. Some sections end with a summary statement supporting the authors’ assumptions and suggesting the relevance of this study, but there is no overall summary of the literature presented.

7. As noted earlier, the research question for this study lacks clarity. The introduction would have the reader believe that the study examines whether the use of games in education can be associated with deep learning, yet when the authors introduce the first experiment they state that the aim is to determine the effects of instructions given in DGBL. Their actual research question is never clearly stated.

The hypotheses are likewise somewhat vague and scattered throughout the article. They are stated most clearly in the results, where the authors compare their analyses with their expectations, and in the discussion following each experiment. A clear statement at the outset would help the reader see the significance of the experiments and results in relation to the authors’ hypotheses.

8. At the beginning of the article, DGBL is defined as a competitive activity, yet the program used in the study is closer to an interactive video than to a game. The program, Appréhender par la Simulation les TRoubles liés à l’Age (ASTRA), is described as a multimedia learning environment with no indication that it is competitive in nature: no rewards, levels, or scoring are described as the participant moves through it. Including rewards or scoring would have given ASTRA the competitive character required by the authors’ own definition of DGBL.

The study is built on two experiments with a different sample for each. The first experiment was designed to explore the effect of instructions in DGBL on depth of learning and learner motivation. The only independent variable was the instructions given to learners before using ASTRA: one group received instructions focused on learning, the other instructions focused on entertainment. Data were collected and analyzed on each group’s recall, knowledge (paraphrase and inference), and motivation. This appears consistent with what the authors said they wanted to research in the initial description of Experiment One: “The aim of this study was thus to ascertain whether the effects of instructions given during the reading phase that have been observed for text-based learning would also manifest themselves during DGBL” (p. 158).

The second experiment is described as a study of the effect of feedback in DGBL on learners’ cognitive processes and motivation. Yet all participants received immediate KCR feedback while using ASTRA, and the independent variable was again the type of instructions given to the two groups. Since instructions are the only difference between the groups, this is more a test of instructions than of feedback.

The opening description of Experiment Two suggests that it hinges on instruction type, yet the introduction frames the experiment around feedback alone. Testing the effect of feedback alone would require withholding feedback from one group while keeping all other parameters constant across groups. A clearer initial statement of the research problem would resolve this confusion. The study might also have been more precise had the same participants been used for both experiments.

9. The sampling for this study is only briefly presented, and changes to it are not explained. For the first experiment, 46 students were reportedly recruited, 22 men and 24 women, none of whom were studying medicine or allied health. The discussion of the pre-test questionnaire indicates that one man was eliminated for scoring above the acceptable threshold on prior knowledge. The remaining participants, who should now total 45, were then divided “randomly” (the process is never described) for the learning phase. At this point the study states that the group receiving the learning instructions comprised 9 men and 15 women, and that the group receiving the entertainment instructions also comprised 9 men and 15 women, for a total of 18 men and 30 women, or 48 participants. These numbers are clearly inconsistent with the originally stated sample size and breakdown, as the check below shows. Whether this is a misprint is never clarified, which undermines the validity of the results, since we cannot be sure how many participants there actually were.
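The inconsistency is simple arithmetic; here is a quick check of the figures as reported in the article (the counts below come from the paper’s text, not from any data I have invented):

```python
# Quick check of the participant counts reported for Experiment One.
recruited = {"men": 22, "women": 24}            # 46 students recruited
excluded = {"men": 1, "women": 0}               # one man excluded after the pre-test
remaining = {k: recruited[k] - excluded[k] for k in recruited}
print(sum(remaining.values()))                  # -> 45 participants available

# Group sizes as reported: 9 men and 15 women in EACH of the two conditions.
reported_total = 2 * (9 + 15)
print(reported_total)                           # -> 48, three more than are available
```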

Choosing students not enrolled in medical or allied health programs was the first step taken to exclude those with prior knowledge of the material. The pre-test was the second step, used to identify and eliminate anyone with an unacceptable level of prior knowledge of the ASTRA content, leveling the playing field for all participants. Neither the pre-test nor the coding used to score it is provided in the article.

The second experiment had a total of 44 participants, 16 men and 28 women, none of whom had taken part in the first experiment. Four of these participants were excluded in the pre-test phase. Nothing else is said about how this sample was divided into the two experimental groups.

10. The ASTRA program is a multimedia learning environment that the authors label DGBL. The description provided mentions no competitive activity corresponding to the definition of digital game-based learning given in the introduction. This discrepancy calls into question whether ASTRA is an acceptable platform for studying DGBL at all, and the program’s limited interactivity is a further concern when labeling it game-based.

The experimental design does not clearly test what it was stated to test. Experiment 1 has two groups with different instructions, learning and entertainment, both without feedback; Experiment 2 has two groups with the same two instruction types, both with KCR feedback. The question might have been better tested by holding the instructions constant and varying the feedback within each experiment, and by using the same participants for both. That the two experiments drew on different participants also weakens any comparison of their results, as the design matrix below makes plain.
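Laid out as a design matrix, the problem is easy to see: each experiment varies instructions down a column, while feedback is varied only between experiments, with different participants in each, so the effect of feedback is never isolated within a single sample.

                            No feedback       KCR feedback
  Learning instructions     Experiment 1      Experiment 2
  Entertainment instr.      Experiment 1      Experiment 2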

The actual questions and coding schemes used for the questionnaires were not included in the article, so their adequacy cannot be verified. Explaining them, or attaching them in an appendix, would have been advisable.

11. The authors acknowledge that the quizzes in ASTRA are a limitation: they may be too easy, since scores were high, with no significant difference, in both experiments. This calls the reliability of the quizzes into question.

The questionnaires used to measure learners’ motivation and knowledge were not presented for review, and the coding used to assess them was neither discussed nor explained. We are told nothing about who the raters were or what method they applied. This makes the reliability of these measures questionable, since they cannot be inspected for irregularities. The results must also be questioned because the initial sample size for Experiment One is unclear, as are the group sizes for Experiment Two.

With respect to validity, the sample itself invites scrutiny. It is not clear how participants were recruited or whether they held a favorable predisposition toward video games. This would also affect the division into groups, which was assigned “randomly” without further clarification.

12. The data analysis used means and standard deviations of scores on the knowledge questionnaires and quizzes, with groups compared by ANOVA. Levene’s test was run first to confirm homogeneity of variances between the two groups.
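For readers unfamiliar with this pipeline, here is a minimal sketch of the two-step procedure using SciPy; the score lists are invented placeholders, not data from the study:

```python
# Minimal sketch of the analysis pipeline the authors describe:
# Levene's test for homogeneity of variances, then a one-way ANOVA.
# The two score lists are invented placeholders, NOT data from
# Erhel & Jamet (2013).
from scipy import stats

learning_group = [14, 12, 15, 11, 13, 16, 12, 14]        # hypothetical scores
entertainment_group = [10, 11, 9, 12, 10, 13, 11, 10]    # hypothetical scores

# Step 1: Levene's test. A p-value above .05 suggests the
# equal-variances assumption behind ANOVA is tenable.
lev_stat, lev_p = stats.levene(learning_group, entertainment_group)
print(f"Levene's test: W = {lev_stat:.2f}, p = {lev_p:.3f}")

# Step 2: one-way ANOVA comparing the group means. With only two
# groups, this is equivalent to an independent-samples t-test.
f_stat, p_val = stats.f_oneway(learning_group, entertainment_group)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")
```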

13. The authors’ final discussion addresses the methodology used as well as limitation that may have affected the results, particularly those obtained from the motivation questionnaire. The nature of the instructions given, both learning and entertainment, are acknowledged as possibly impacting the participants’ responses to the paraphrase-type questions, as neither of the instructions explicitly promoted them to memorize material presented.

The motivation results from Experiment Two seem to baffle the researchers. They question why the participants who received the entertainment instructions reported a lower fear of failure than those who received the learning instructions, and they suggest further studies to identify the factors behind this. Indeed, throughout the final discussion the authors repeatedly call for more research.

The authors present three particular limitations: the ASTRA system, the quizzes within ASTRA, and their own methodology. They note that ASTRA is not very interactive, and the results suggest that the quizzes may be too simple, since the KCR feedback did not affect quiz scores. Their critique of the methodology centers on the data collected; they suggest further studies in which online data are gathered. One limitation they do not address is the sample itself: they do not discuss the constraints imposed by their sampling approach and small sample size.

14. The authors present the results and analyses of both experiments and discuss them in depth, with references to past research. They readily admit when results do not corroborate their hypotheses, yet still report them in full, and they are quick to note when a result supports previous research, connecting it to the earlier study with a citation.

The authors’ conclusions align with the results presented, and they address the findings with respect to their expectations and to previous research. The final discussion of results includes all findings and offers possible limitations to help explain results that do not support their hypotheses.

15. The authors connect the results to their theoretical base wherever possible. Some of the experimental results did not support their theory, or contradicted theories presented and discussed at length in the introduction. One clear problem, as noted earlier, is that a significant portion of the research discussed early in the article neither supported their theory nor pertained solely to their study. A more focused and cohesive introduction and literature review would have made connecting the results an easier task.

16. The introduction notes that many of today’s learners spend leisure time playing video games and that this has helped popularize DGBL. This opens many research doors: how best to design and use DGBL, and how to identify the characteristics of learners who will thrive in a DGBL environment.

In evaluating the two experiments, I questioned whether the researchers considered that the feedback itself might make ASTRA more game-like. With the addition of KCR feedback, the participant experiences a reward similar to scoring in a traditional game; this sense of achievement creates a feeling of competition and motivates the user to continue. Without feedback, as in Experiment One, those who received the entertainment instructions would not fully perceive the program as a game and would not feel the motivation or excitement those instructions led them to expect. This learner perception is certainly worth further research.

Using the same group of participants in both experiments might have provided more insight into the effects of feedback and instructions, enabling a closer look at the factors that influence deep learning in DGBL. Exploring differences in participants’ results also offers a window into which learners benefit from particular instruction types and feedback. Learning styles are important to consider when deciding to implement DGBL, as not all learners will find it a motivating tool.

The results from Experiment Two are the most intriguing to me. The addition of feedback, with its potential to be read as reward or scoring, created a more game-like atmosphere in which I would have expected the entertainment-instruction group to perform better. I think this combination of DGBL design elements is well worth exploring further.
