Center for Open Access in Science (COAS)
OPEN JOURNAL FOR INFORMATION TECHNOLOGY (OJIT)

ISSN (Online) 2620-0627 * ojit@centerprode.com

2023 - Volume 6 - Number 2


Automated Assessment System Using Machine Learning Libraries

Adebola Victor Omopariola * ORCID: 0000-0002-1234-3911
Veritas University, Department of Computer and Information Technology, Abuja, NIGERIA

Chukwudi Nnanna Ogbonna * ORCID: 0000-0002-8655-5001
Veritas University, Department of Computer and Information Technology, Abuja, NIGERIA

Felix Uloko
Veritas University, Department of Computer and Information Technology, Abuja, NIGERIA

Monday J. Abdullahi * ORCID: 0000-0002-3596-4550
Air Force Institute of Technology, Department of Computer Science, Kaduna, NIGERIA

Open Journal for Information Technology, 2023, 6(2), 97-122 * https://doi.org/10.32591/coas.ojit.0602.02097o
Received: 7 January 2023 ▪ Revised: 23 July 2023 ▪ Accepted: 25 October 2023

LICENCE: Creative Commons Attribution 4.0 International License.

ARTICLE (Full Text - PDF)


ABSTRACT:
Assessment and grading of students is a task as old as schooling itself. It has traditionally been carried out by teachers in primary and secondary schools, by examination bodies such as JAMB, and by lecturers in tertiary institutions. Until now, students’ marks have been influenced by external factors such as poor handwriting, lengthy paragraphs, answers that circle around the point rather than addressing it directly, and the sheer number of scripts a lecturer has to mark. As a result, students receive lower or higher marks than they deserve. This project creates an ML (machine learning) powered assessment system that takes the assignment questions and the marking scheme and awards marks similar to those an ideal lecturer would give. This also reduces the time lecturers spend on marking and ensures that students receive their results on time. The system is built with Python and machine learning libraries and is tested against a range of candidate answers and their gradings, enabling it to grade assignments as soon as they are uploaded. The research is limited in two respects: the system can only mark short sentences accurately, not long paragraphs, and it can only mark with the aid of the marking scheme, not without it, so it is not a truly intelligent model in that regard. The research showed that the system is indeed capable of measuring the similarity between two answer paragraphs, but it requires additional processing to produce the most accurate results.
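
The grading approach the abstract describes, awarding marks according to how closely a student’s answer matches the marking-scheme answer, can be sketched in a few lines of Python. The sketch below is an illustration under assumptions, not the authors’ implementation: it stands in scikit-learn’s TF-IDF vectors and cosine similarity for whatever ML libraries the paper uses, and the function name grade_answer and the linear marks scaling are invented for the example.

# Minimal sketch: score a student answer against the marking-scheme
# answer by scaling their TF-IDF cosine similarity into the marks
# available. scikit-learn and all names here are illustrative
# assumptions, not the authors' published code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def grade_answer(scheme_answer: str, student_answer: str, max_marks: float) -> float:
    # Fit one vectorizer on both texts so they share a vocabulary.
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform([scheme_answer, student_answer])
    # Cosine similarity of non-negative TF-IDF vectors lies in [0, 1];
    # scale it linearly into the marks available for the question.
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return round(similarity * max_marks, 1)

scheme = "Photosynthesis converts light energy into chemical energy stored as glucose."
answer = "Plants turn light energy into chemical energy that is stored as glucose."
print(grade_answer(scheme, answer, max_marks=5.0))  # prints a mark between 0.0 and 5.0

In the paper’s terms, a bag-of-words sketch like this covers only the short-answer case; marking long paragraphs reliably needs the “extras” the abstract alludes to, such as synonym handling or sentence embeddings.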

KEY WORDS: assessment, automated assessment system, machine learning library.

CORRESPONDING AUTHOR:
Adebola Victor Omopariola, Veritas University, Department of Computer and Information Technology, Abuja, NIGERIA.

