Automated Grading Model with Adjusted Level of Lenience for Short Answer Questions using Natural Language Processing

S Zindove; S Chaputsira

Publication Date: 2024/08/20

Abstract: Automated grading of short answer questions is a challenging task that requires understanding and evaluating free-text responses. This research presents a model that combines the sentence-embedding language model all-mpnet-base-v2 with a machine learning-based lenience adjustment mechanism to improve the accuracy and fairness of automated grading systems. The proposed model uses all-mpnet-base-v2 for natural language understanding and feature extraction from student responses. To accommodate the variability in acceptable answers and provide fair grading, an integrated machine learning model adjusts the level of lenience dynamically, allowing the system to handle a wide range of responses while remaining consistent and reliable. Experimental results show that combining all-mpnet-base-v2 with the lenience adjustment model significantly improves grading accuracy over traditional methods, offering a robust automated grading solution that can adapt to diverse educational contexts and requirements.
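
The abstract does not include implementation details, so the following is a minimal sketch, assuming the publicly available sentence-transformers package: it computes a cosine similarity between a student response and a reference answer using all-mpnet-base-v2, then applies a lenience adjustment to the full-credit threshold. The `grade` function, the base threshold of 0.8, and the linear lenience formula are illustrative assumptions, not the authors' method; in the paper's system the lenience level is produced by a trained model rather than supplied by hand.

```python
# Minimal sketch of similarity-based grading with a lenience-adjusted
# threshold. NOTE: the lenience formula below is a placeholder; the
# paper's learned adjustment mechanism is not specified in the abstract.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

def grade(student_answer: str, reference_answer: str, lenience: float = 0.0) -> float:
    """Score a response in [0, 1] against a reference answer.

    `lenience` in [0, 1] relaxes the full-credit similarity threshold;
    the linear form used here is an assumption for illustration only.
    """
    # Encode both texts; normalized embeddings make cosine similarity
    # equivalent to a dot product.
    emb = model.encode([student_answer, reference_answer], normalize_embeddings=True)
    similarity = float(util.cos_sim(emb[0], emb[1]))
    threshold = 0.8 - 0.3 * lenience  # assumed base threshold and slope
    if similarity >= threshold:
        return 1.0
    # Linear partial credit below the threshold.
    return max(0.0, similarity / threshold)

# Example: a paraphrased answer scored with moderate lenience.
print(grade("Plants turn sunlight into chemical energy.",
            "Photosynthesis converts light energy into chemical energy in plants.",
            lenience=0.5))
```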

Keywords: all-mpnet-base-v2, Lenience, Convolutional Neural Networks, Pretrained Models.

DOI: https://doi.org/10.38124/ijisrt/IJISRT24JUL1710

PDF: https://ijirst.demo4.arinfotech.co/assets/upload/files/IJISRT24JUL1710.pdf
