Automated Essay Grading System with Automatically Generated Feedback

Abstract

Providing constructive essay feedback is a crucial yet time-consuming task, particularly in preparation for the Malaysian University English Test (MUET). This study proposes an automated essay grading system (AEGS) that leverages natural language processing (NLP) and artificial intelligence (AI) technologies to assess essays against MUET rubrics, assisting language teachers in grading essays and, more importantly, providing essay feedback efficiently to support students' learning. The system was developed using OpenAI's generative pre-trained transformer GPT-4o application programming interface (API), focusing on key features such as automated grading, detailed feedback generation, and a user-friendly interface design. AEGS extracts content from typed or scanned essays using Optical Character Recognition (OCR) and evaluates submissions via AI-powered analysis. The difference between the grades generated using MUET rubrics and those assigned by the language teachers is within ±5 marks, an acceptable range. As a proof of concept for this pilot study, an interview with a MUET language teacher and User Acceptance Testing (UAT) were conducted; the functional testing and user feedback indicate improved grading efficiency, consistency, and rubric-aligned feedback delivery. Most AEGSs do not provide feedback, but the proposed system not only grades essays but also provides essay-writing feedback that teachers can adjust accordingly. The language teacher's remarks on the usefulness of the AI-generated essay feedback indicate the system's potential to reduce teachers' workload, improve overall feedback quality, and scale to wider academic use.
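
The abstract describes a pipeline of OCR extraction followed by GPT-4o evaluation against MUET rubrics. The following is a minimal sketch of what such a pipeline might look like; the OCR library (pytesseract with Pillow), the function names, and the placeholder rubric string are assumptions for illustration, as the abstract does not specify the implementation details.

# Minimal sketch of an OCR-to-GPT-4o grading pipeline, assuming
# pytesseract/Pillow for OCR and the official OpenAI Python client.
from openai import OpenAI
from PIL import Image
import pytesseract

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: the actual MUET band descriptors would go here.
MUET_RUBRIC = "..."

def extract_essay(path: str) -> str:
    """OCR a scanned essay image into plain text."""
    return pytesseract.image_to_string(Image.open(path))

def grade_essay(essay: str) -> str:
    """Request a rubric-aligned grade and constructive feedback from GPT-4o."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a MUET essay examiner. Grade the essay against "
                    "the rubric below and give constructive, rubric-aligned "
                    "feedback that a teacher could review and adjust.\n\n"
                    + MUET_RUBRIC
                ),
            },
            {"role": "user", "content": essay},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    essay_text = extract_essay("scanned_essay.png")  # typed essays can skip OCR
    print(grade_essay(essay_text))

In such a design, keeping the rubric in the system message and the essay in the user message lets the same grading prompt be reused across submissions, which supports the consistency the study reports.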
