Utility of Complex Alternatives in Multiple-Choice Items: The Case of All of the Above

Document Type: Research Article

Authors

Shahid Rajaee Teacher Training University

Abstract

This study investigated the utility of all of the above (AOTA) as an option in multiple-choice items. It aimed at estimating the item fit, item difficulty, item discrimination, and guessing factor associated with such an option. Five reading passages from the Key English Test (KET, 2010) were adapted, and the test was reconstructed in two parallel forms: Test 1 did not include the abovementioned alternative, whereas Test 2, administered two weeks later, did. The two tests, 32 items each, were administered to 142 third-grade high school students. Results, analyzed through the three-parameter logistic model, indicated that the multiple-choice items that included the alternative all of the above were easier. Results also revealed that the option all of the above increased the guessing factor. Because guessing is a source of measurement error, it may threaten test validity and reliability.
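For reference, the three-parameter logistic (3PL) model used in the analysis is the standard IRT formulation (Baker, 2001; DeMars, 2010), under which the probability that an examinee of ability $\theta$ answers item $i$ correctly is

\[
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}},
\]

where $a_i$ is the item discrimination, $b_i$ the item difficulty, and $c_i$ the pseudo-guessing parameter (the lower asymptote of the item characteristic curve). The guessing factor estimated in this study corresponds to $c_i$: a larger value means that even low-ability examinees answer the item correctly at a substantial rate.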

Keywords

References
Baker, F. B. (2001). The basics of item response theory. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation.
Bruno, E. J., & Dirkzwager, A. (1995). Determining the optimal number of alternatives to a multiple-choice test item: An information theoretic perspective. Educational and Psychological Measurement, 55(6), 959-966.
Burton, S. J., Sudweeks, R. R., Merrill, P. F., & Wood, B. (1991). How to prepare better multiple-choice test items: Guidelines for university faculty. Provo, UT: Brigham Young University Testing Services and Department of Instructional Science.
Cambridge Key English Tests (2010). Cambridge: Cambridge University Press.
Crehan, K. D., Haladyna, T. M., & Brewer, E. W. (1993). Use of an inclusive option and the optimal number of options for multiple-choice items. Educational and Psychological Measurement, 53(1), 241-247.
DeMars, C. (2010). Item response theory. Oxford: Oxford University Press.
Dudycha, A. L., & Carpenter, J. B. (1973). Effects of item formats on item discrimination and difficulty. Journal of Applied Psychology, 58, 116-121.
Farhady, H., Jafarpur, A., & Birjandy, P. (2011). Testing language skills: From theory to practice. Tehran: Center for Studying and Compiling University Books in Humanities (SAMT).
Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15, 309-334.
Harasym, P. H., Leong, E. J., Violato, C., Brant, R., & Lorscheider, F. L. (1998). Cuing effect of all of the above on the reliability and validity of multiple-choice test items. Evaluation & the Health Professions, 21(1), 120-133.
Mousavi, S. A. (2009). An encyclopedic dictionary of language testing. Tehran: Rahnama Publications.
Mueller, D. J. (1975). An assessment of the effectiveness of complex alternatives in multiple-choice achievement test items. Educational and Psychological Measurement, 35, 135-141.
Musial, D., Nieminen, G., Thomas, J., & Burke, K. (2009). Foundations of meaningful educational assessment. New York: McGraw-Hill.
Osterlind, S. J. (2002). Constructing test items: Multiple-choice, constructed-response, performance, and other formats. New York: Kluwer Academic Publishers.
Owen, S. V., & Froman, R. D. (1987). What's wrong with three-option multiple-choice items? Educational and Psychological Measurement, 47(2), 513-522.
Pashasharifi, H., & Kiyamanesh, A. (1984). Shivehaye arzeshyabi az amookhtehaye danesh amoozan [Methods of evaluating students' learning]. Tehran: Sherkat-e Chap va Nashre Iran.
Rossi, J. S., McCrady, B. S., & Paolino, T. J., Jr. (1978). A and B but not C: Discriminating power of grouped alternatives. Psychological Reports, 42, 1346.
Tripp, A., & Tollefson, N. (1985). Are complex multiple-choice options more difficult and discriminating than conventional multiple-choice options? Journal of Nursing Education, 24(3), 92-98.