How differential credit assignment affects students' responses in the Constructive Multiple-choice Testing System

The Korean Journal of Cognitive and Biological Psychology, (P)1226-9654; (E)2733-466X
2010, v.22 no.4, pp.573-588
https://doi.org/10.22172/cogbio.2010.22.4.008

Abstract

The Constructive Multiple-choice Testing (CMT) system is a new computerized testing system developed to supplement the weaknesses of the multiple-choice (MC) format. In the CMT system, the examinee responds to the stem first in the short answer format and then in the MC format. By checking the short answer portion of the response, one can therefore tell whether the examinee chose the correct option in the MC format because he or she actually knew the answer. The current study examined whether scores obtained from the CMT test differ from those obtained from short answer or MC tests. Two hundred and twenty-seven sixth-grade elementary school students were randomly assigned to two groups. In Experiments 1 and 2, the performance of the group that took the test in the CMT format was compared with that of the group that took the test in the short answer format. In Experiment 1, where the instruction for the CMT test was that the two portions would be graded separately, the mean of the short answer portion of the CMT group was lower than that of the short answer group. In Experiment 2, however, the examinees were told that the short answer portion would be weighted nine times as heavily as the multiple-choice portion (90% vs. 10%); under this instruction, there was no difference in the mean short answer scores of the two groups. In Experiment 3, the mean multiple-choice scores of the CMT group, which was given the same instruction as in Experiment 2, were compared with those of the MC group. The mean of the MC portion of the CMT group was higher than that of the MC group. These results suggest that performance is not affected by having examinees respond twice, provided that more points are allotted to the short answer portion of the CMT format.
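The differential credit assignment manipulated in Experiments 2 and 3 amounts to a simple weighted sum. As a sketch in notation of my own (the 90%/10% split is stated in the abstract, but no explicit formula is reported there):

$$\text{Total} = 0.9 \times \text{SA} + 0.1 \times \text{MC}$$

where SA and MC denote an examinee's scores on the short answer and multiple-choice portions of the CMT test, respectively.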

Keywords
Computerized Testing, Constructive Multiple-choice Testing System, Response Equivalence, Short Answer Format, Multiple-choice Format

