2.5 OVERVIEW OF PAPER-PEN EXAMINATION AND COMPUTER-BASED EXAMINATION
Literature on the use of conventional examinations (paper-pen examinations, paper-pen testing, written examinations, paper-pencil examinations, etc.) and computer-based examinations (computer-based testing, e-examinations, electronic examinations, online assessment, etc.) dates as far back as the 1970s in developed nations, where it has a long history, while in developing nations such as Nigeria, computer-based examinations were introduced only recently, in the 21st century. For example,
Bodmann and Robinson (2004) conducted an experimental study to compare speed and performance differences between CBTs and PPTs. Fifty-five undergraduate students enrolled in an educational psychology course, all of whom were already familiar with CBTs, participated in the study. Both the CBT and the PPT contained 30 multiple-choice items with a 35-minute time limit. The findings showed that undergraduates completed the CBT faster than the PPT, with no difference in scores. Research outcomes have thus supported the view that when students are motivated and testing conditions are equivalent, there are no differences between scores obtained via CBT and PPT (Lynch, 1997; Marson, Patry, and Berstein, 2001; cited in Alabi, Issa and Oyekunle, 2012). Both in and out of the classroom, students' educational use of computers has also increased, particularly for writing and research (Becker, 1999; Russell, O'Brien, Bebell, and O'Dwyer, 2002). In the field of education, few comparative studies on test mode were carried out prior to 1986. However, an early 1985 study by Lee and Hopkins found the mean paper-and-pencil test score to be significantly higher than the mean computer-based test score in arithmetic reasoning. The results of that study highlighted scratch-work space as a salient factor in arithmetic test performance. In addition, Lee and Hopkins concluded that the inability to review and revise work affected performance, and argued that only "software that allows the conveniences of paper-and-pencil tests, e.g., the ability to change answers and the ability to review past items" (p. 9) should be used in future applications. Collectively, early research on the cross-modal validity of arithmetic reasoning tests provided mixed results: computers were found to enhance (Johnson and Mihal, 1973) as well as impede (Lee, Moreno, and Sympson, 1986) test performance.
Powers and O'Neill (1993) conducted a related study whose purpose was to assess the degree to which CBT contributed to the performance of a number of mathematics students; the method used was a survey, and their findings showed no serious impact.
Bugbee (1996) concluded that the use of computers genuinely affects testing, notwithstanding that CBT and PPT can be equivalent, especially when test developers take responsibility for demonstrating how such equivalence can be achieved. He stated further that the barriers to the use of CBT are inadequate test preparation and failure to grasp the unique requirements for implementing and maintaining it, emphasizing that such factors as design, development, administration, and user characteristics need to be considered in using CBT.
Schenkman, Fukuda and Persson (1999) identified the quality of the monitor as one of the numerous variables that affect students' performance when questions are presented on a computer.
Millsap (2000) also conducted a study to investigate likely disparities between the two testing modes among 585 military recruits; the method used was content analysis, and the finding was that the mode did not affect the results.
Kirby, Winston et al. (2002) focused on students' impatience (their time-discount behaviour), which influences their academic performance. All of the research reviewed supports the hypothesis that student performance depends on various socio-economic, psychological, and environmental factors. Research findings indicate that student performance is affected by factors such as learning ability; the new paradigm about learning assumes that all students can and should learn at higher levels, but this should not be considered a constraint, because other factors such as race and gender can also affect students' performance (Hansen, 2000).
McVay (2002) conducted a study that examined disparities in students' performance between the CBT and PPT testing modes; the method used was a comparison of the performance of the same students under the two modes, and his findings showed that disparities exist.
Clariana and Wallace (2002) investigated several key factors in CBT versus PPT assessment. The factors studied were content familiarity, computer familiarity, competitiveness, and gender. The study used a post-test-only design with one factor, test mode (computer-based versus paper-based). Students' scores on a 100-item multiple-choice test and students' self-reports on a distance-learning survey were treated as dependent variables. Four sections of a computer fundamentals course, comprising 105 students, were selected as the sample for the investigation. The results showed that CBT delivery had a positive impact on students' scores compared with PPT: the CBT group out-performed the PPT group. Gender, competitiveness, and computer familiarity were not related to this performance difference, though content familiarity was.
On the impact of CBT on student attitudes and behaviours, Butler (2003) confirmed the association between a moderate number of tests and better student attitudes; his respondents were found to be generally more positive toward the proctored CBT facility than toward in-class pencil-and-paper testing. Russell et al. (2003) carried out research that investigated the reasons behind low performance in CBT; the method used was a survey, and their findings concluded that the low performance can be attributed to the inability to practise on and review past questions.
Johnson et al. (2004) conducted research that examined the relationship between assessment mode and students' perception; the method used was content analysis, and the finding of their study was that questions presented on a computer seem harder than the same questions presented on paper.
Ozden, Erturk and Sanli (2004) investigated students' perception of online assessment; the method used was a questionnaire and interviews among 46 students, and their major finding was that students perceived it as an effective testing mode.
Jim and Sean (2006) concluded that e-assessment can be justified in a number of ways. It can help avoid the meltdown of current paper-based systems; it can assess valuable life skills; and it can be better for users. For example, by providing on-demand tests with immediate and perhaps diagnostic feedback, and more accurate results via adaptive testing, it can help improve the technical quality of tests by improving the reliability of scoring. Therefore, with proper preparation of students for the exam via an introduction to the software, CBT could be a good method for effectively curtailing examination malpractice.
Williams (2007) conducted research that examined the attitude of pre-hospital undergraduate students undertaking a web-based examination as an adjunct to the traditional paper-based examination mode; the method used was a survey questionnaire among 94 students, and his findings showed high student satisfaction and performance.