International Association of Educators   |  ISSN: 1949-4270   |  e-ISSN: 1949-4289

Original article | Educational Policy Analysis and Strategic Research 2020, Vol. 15(1) 7-21

Investigation of the Use of Electronic Portfolios in the Determination of Student Achievement in Higher Education Using the Many-Facet Rasch Measurement Model

Mehmet Şata & İsmail Karakaya

pp. 7-21  |  DOI: https://doi.org/10.29329/epasr.2020.236.1

Published online: March 24, 2020


Abstract

This study aimed to determine rater behaviors in the evaluation of student electronic portfolios used to measure achievement in higher education, and thereby to evaluate the usability of the electronic portfolio system. Because rater behaviors adversely affect both the validity and reliability of performance assessments, it is important to identify their effect and to interpret the results accordingly. The data were collected from students enrolled in the English language teaching program at Gazi University's Gazi Education Faculty, within the scope of the measurement and assessment course, in the fall semester of 2017-2018. An analytic rubric developed by the researchers was used to evaluate the student electronic portfolios. The study included two participant groups: three raters and 61 students (11 male, 50 female). The many-facet Rasch measurement model was used to analyze the data, as it was appropriate for the structure of the data set. The findings showed that when non-objective measurement tools are used, one or more rater behaviors interfere with the measurement of individual performance and consequently reduce the validity and reliability of the measurements. In conclusion, individuals' performance on electronic portfolios in higher education is generally affected by rater behavior in the evaluation process, independent of the measurement tool. In addition, the study confirmed that electronic portfolios can be used to determine individual performance in higher education.

Keywords: Electronic portfolios, rater behavior, higher education, many-facet Rasch, validity.
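
For readers unfamiliar with the analysis method, the following is a minimal sketch of the many-facet Rasch measurement model in Linacre's standard formulation; it assumes the three facets implied by the design described in the abstract (students, rubric criteria, and raters), and the exact parameterization reported in the full text may differ.

\[
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \alpha_j - \tau_k
\]

Here P_nijk is the probability that rater j awards student n a rating in category k on rubric criterion i; θ_n is the student's ability, δ_i the criterion's difficulty, α_j the rater's severity, and τ_k the threshold between categories k-1 and k. Rater effects such as severity or leniency appear as unusually large positive or negative α_j estimates, which is how the model separates rater behavior from student performance.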


How to Cite this Article?

APA 6th edition
Sata, M., & Karakaya, I. (2020). Investigation of the Use of Electronic Portfolios in the Determination of Student Achievement in Higher Education Using the Many-Facet Rasch Measurement Model. Educational Policy Analysis and Strategic Research, 15(1), 7-21. doi: 10.29329/epasr.2020.236.1

Harvard
Sata, M. and Karakaya, I. (2020). Investigation of the Use of Electronic Portfolios in the Determination of Student Achievement in Higher Education Using the Many-Facet Rasch Measurement Model. Educational Policy Analysis and Strategic Research, 15(1), pp. 7-21.

Chicago 16th edition
Sata, Mehmet, and Ismail Karakaya. 2020. "Investigation of the Use of Electronic Portfolios in the Determination of Student Achievement in Higher Education Using the Many-Facet Rasch Measurement Model." Educational Policy Analysis and Strategic Research 15 (1): 7-21. doi:10.29329/epasr.2020.236.1.
