STUDENTS’ PERCEPTION OF AUTO-SCORED ONLINE EXAMS IN BLENDED ASSESSMENT: FEEDBACK FOR IMPROVEMENT

Authors

DOI:

https://doi.org/10.5944/educxx1.19559

Keywords:

Higher Education, Feedback (Response), Student Surveys, Interviews, Alternative Assessment, Statistical Analysis.

Agencies:

Universitat Politècnica de València, through Project A25/14 (Convocatoria de Proyectos de Innovación y Convergencia de la UPV — UPV Call for Innovation and Convergence Projects).

Abstract

The development of information and communication technologies has led to an increase in the use of Computer-Based Assessment (CBA) in higher education. Over the last decade, there has been an ongoing discussion of online versus traditional pen-and-paper exams. The aim of this study was to verify whether students have reservations about auto-scored online exams and, if so, to determine the reasons. The study was performed in the context of blended assessment on a first-year university physics course with 1200 enrolled students. Among them, 463 answered an anonymous survey, supplemented by information obtained from an open-ended question and from interviews with students. Three factors (labelled ‘F1-Learning,’ ‘F2-Use of Tool,’ and ‘F3-Assessment’) emerged from the quantitative analysis of the survey, and an additive scale was established. We found significant differences between the ‘F3-Assessment’ factor and the other two, indicating a lower acceptance of the tool for student assessment: it seems that, even though students are used to computers, they lack confidence in online exams. To add strength and nuance to the quantitative results of the survey, we examined this topic in more depth through the open-ended question and interviews with a small group of 11 students. Although their comments were positive in general, especially regarding ease of use and the tool’s usefulness in indicating the level achieved during the learning process, there was also some criticism of the clarity of the questions and the strictness of the marking system. These two issues, among others, may explain the poorer perception of ‘F3-Assessment’ and the students’ reluctance towards online exams and automatic scoring.
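The abstract describes a factor analysis of Likert-scale survey items, additive scales built from those items, and a comparison showing lower ratings for the assessment-related factor. The snippet below is a minimal, purely illustrative sketch of that kind of workflow, not the authors’ actual procedure: the file name survey.csv, the item names q1–q9, and the item-to-factor assignment are all assumptions introduced here for the example.

```python
# Minimal sketch (assumed data layout): survey.csv holds one row per student
# and one column per 5-point Likert item; item groupings are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from scipy.stats import wilcoxon

responses = pd.read_csv("survey.csv")  # rows = students, columns = Likert items

# Exploratory three-factor model, mirroring the three-factor structure
# reported in the abstract.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses.values)
loadings = pd.DataFrame(fa.components_.T,
                        index=responses.columns,
                        columns=["F1", "F2", "F3"])
print(loadings.round(2))

# Additive scales: the mean of the items assigned to each factor.
items = {"F1_learning":    ["q1", "q2", "q3"],
         "F2_use_of_tool": ["q4", "q5", "q6"],
         "F3_assessment":  ["q7", "q8", "q9"]}
scores = pd.DataFrame({name: responses[cols].mean(axis=1)
                       for name, cols in items.items()})

# Paired non-parametric comparison of F3 against the other two scales,
# analogous to checking whether F3-Assessment is rated lower.
for other in ["F1_learning", "F2_use_of_tool"]:
    stat, p = wilcoxon(scores["F3_assessment"], scores[other])
    print(f"F3 vs {other}: W={stat:.1f}, p={p:.4f}")
```

A confirmatory approach (e.g., CFA on polychoric correlations) would be the natural next step for ordinal Likert data; the exploratory model above is only meant to show how the three scales and their comparison could be assembled.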




Published

2018-05-31

Issue

Section

Estudios