Instructor Courtney Perry
Psychology 150
2 October 2008
Journal Article Critique of
“An Empirical Investigation of Student Achievement and Satisfaction in Different Learning Environments”
Comprehension
Purpose
The purpose of “An Empirical Investigation of Student Achievement and Satisfaction in Different Learning Environments” by Jon Lim, May Kim, Steven S. Chen, and Cynthia E. Ryder was to compare the tested learning achievement and self-reported satisfaction of students in a wellness course taught by three different methods of instruction: an online course, a traditional classroom course, and a hybrid “web-enhanced residential course” combining online and classroom time (113). The results would be used to support or argue against teaching wellness courses online or through a combined method.
Results/Conclusions
The study found that students in both the online and hybrid courses showed significantly greater achievement than students in the traditional classroom course. Students in the hybrid course reported significantly greater satisfaction than both the online and classroom students, while the online and classroom students showed no significant difference in satisfaction with the course (117). The study concluded that “a well-designed online course and a web-enhanced residential course can be effective in teaching wellness” (113).
Analysis and Evaluation
Strengths
The study goes to great effort to make the students’ experiences in the three courses as similar as possible, apart from the delivery methods themselves. Each group takes the same course, with “the same instructor, requirements, learning objectives, and course materials such as exams, assignments and textbook” (115). In my experience, online courses typically have very different requirements and course materials, and courses with different instructors are likely to be judged on the instructor rather than the delivery method. Standardizing the groups’ instruction and course work is therefore essential to isolating the delivery method as the variable being evaluated in the study.
The method used for measuring achievement is another strength of this study. Rather than merely measuring final grades, the study gave a pre-course content knowledge test, the mean of which was then compared to the post-course content knowledge test mean to create an achievement level for each instructional delivery method (115). This method evaluates the actual increase in knowledge and takes into account students’ prior knowledge of the course subject.
In addition to the two statistics central to the study’s purpose (achievement and satisfaction), the study had the students rate the quality of communication and support from the instructor (117-118). These variables are often cited as points of difference between online and classroom courses, so having this data to compare against the achievement and satisfaction results can answer likely questions about the effectiveness of online coursework. For example, since the online students in this sample did learn the material well, a skeptic might still ask whether the online format could have provided sufficient communication or support had the students been struggling.
Weaknesses
The authors of the study mention one weakness themselves, the lack of random assignment of students (118), but note that it is a common problem in research on online course work (114). Randomly assigning students to online and traditional courses would either be unethical, forcing students to spend an entire semester in an instructional delivery method they did not choose; extremely expensive, if students were paid for a full semester of research; or unrepresentative of the course work and environment of college students, if simulated short courses were used instead.
Because of the strict sample requirements (same instructor, same course materials, same learning objectives), the study has a fairly small sample size of 153 students. The course sections were each close in size, with 31 students in the online course, 40 in the hybrid course, and 82 in the two traditional courses (116). However, that leaves only 31 students in the online course. With the traditional courses serving as the control group, and the hybrid course sharing traits of both the control and experimental groups, only 31 students are really providing new information for the study’s purpose. I cannot find any obvious problems the results may have from this small online sample, though there does appear to be one with the hybrid course. Of its 40 students, 15 are men and 25 are women (116). The hybrid course is the only one to show significantly higher self-reported satisfaction, and it is the only course with a noticeably skewed gender balance. There is also a noticeable sampling bias in age: the classroom and hybrid courses had mean ages of 20.4 and 20.8 years, while the online course was significantly older, with a mean age of 30.3 years (116). This age difference could produce very different expectations for a course, and therefore differences in self-reported satisfaction and in ratings of the quality of communication and support.
Finally, the course chosen limits the usefulness of the study’s results. Wellness courses are not generally difficult for students, and I would assume an exceedingly large percentage of students learn the material easily and are satisfied with their experience, in part because of low expectations for the class. The study’s classroom results support this, with fairly high achievement and satisfaction levels in the study’s course. So this study did not put the online and hybrid instructional methods through a tough test; the results could reflect the ease of the class rather than any ability of online or hybrid courses to instruct students. The study’s authors acknowledge this, though not directly, by limiting their conclusion to being “effective in teaching wellness” (113).
Synthesis and Evaluation
The most significant improvement I would suggest for this study is to expand its sample. I like that the instructional delivery methods are made comparable by sharing a single instructor, course, and set of course materials, but I would expand the sample to include multiple such pairings, with different courses each taught by its own instructor. I would include some more difficult courses, where the traditional classroom achievement ratings would not be expected to be as high, such as courses usually taken only by students in that field of study. We might see more significant differences in the effectiveness of the different instructional delivery methods when there is more room for them to influence the results.
Combining the results of different courses would also supply a greater sample size, leaving us less likely to have a single course section with a skewed demographic balance affecting our analysis. The expanded study should also analyze the differences between a single course’s delivery methods before merging them into a total score, so we can see whether a specific field of study or course had an unusually large or divergent effect on the total.
With a sample choosing its own instructional method, we are likely to end up with some sampling bias even in a larger sample. The study should look for these biases, compare the values from the skewed demographic groups, and seek statistical methods to report these results more accurately.
My last improvement would be in evaluating communication and support. With the intention of selecting more difficult subject material, it is even more important that the available communication and support be evaluated. The study does not mention anywhere what percentage of students gave a “not applicable” or similar response when surveyed on the quality of communication and support. Some students will not make an effort to communicate or request support, and that should be noted; they should not be rating something they have no experience of. I would also like to know whether students actually received support when evaluating the achievement levels in the course.
Works Cited
Lim, Jon, et al. "An empirical investigation of student achievement and satisfaction in different learning environments." Journal of Instructional Psychology 35.2 (June 2008): 113-119. PsycINFO. EBSCO. Wake Technical Community College, Raleigh, NC. 27 Sep. 2008 <http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2008-09561-001&site=ehost-live>.