Natural Language Processing (NLP)-Driven Subjective Answer Ranking
Nikitha A1, Dr Aparna K 2
1 Student, Department of Master of Computer Application, BMS Institute of Technology and Management, Bengaluru, Karnataka
2 Associate Professor, Department of Master of Computer Application, BMS Institute of Technology and Management, Bengaluru, Karnataka
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - University and board examinations are conducted offline every year, and large numbers of students appear for subjective (descriptive) papers. Manually evaluating such a large volume of answer scripts requires substantial time and effort, and the quality of evaluation can vary with the evaluator's disposition. Competitive and entrance tests, by contrast, commonly use objective or multiple-choice questions; because of the way they are administered, they can be reviewed by machine, which makes their evaluation straightforward. Manual evaluation of subjective papers, however, remains a difficult and taxing undertaking. A major obstacle to applying artificial intelligence (AI) to the analysis of subjective answers is the limited interpretability and acceptance of its findings. There have been numerous attempts to evaluate student responses computationally, but most of this work relies on standard counts or exact terms, and carefully curated data sets are scarce. This paper proposes a novel approach for automatically evaluating descriptive answers using machine learning, natural language processing, and tools such as WordNet, Word2vec, Word Mover's Distance (WMD), cosine similarity, Multinomial Naive Bayes (MNB), and term frequency-inverse document frequency (TF-IDF). Responses are assessed against solution statements and keywords, and a machine learning model is built to predict the grades of responses. Overall, the results indicate that WMD outperforms cosine similarity; the machine learning model can also be employed independently with appropriate training. Without the MNB model, experimentation produces an accuracy of 88%; using MNB, the error rate is further reduced by 1.3%.
Keywords - Subjective answer evaluation, big data, machine learning, natural language processing, Word2vec, WordNet.
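As a rough illustration of the pipeline described in the abstract, the sketch below compares a student answer against a model answer using TF-IDF cosine similarity and Word Mover's Distance over pretrained Word2vec embeddings, and trains a Multinomial Naive Bayes classifier to predict a grade. This is a minimal sketch, not the authors' implementation: the pretrained model name, the toy answers, and the grade labels are illustrative assumptions.

```python
# Hedged sketch of the scoring pipeline: TF-IDF cosine similarity, WMD over
# Word2vec embeddings, and an MNB grade classifier. Data and labels are toy
# examples, not from the paper's data set.
import gensim.downloader as api
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.naive_bayes import MultinomialNB

model_answer   = "photosynthesis converts light energy into chemical energy in plants"
student_answer = "plants turn light into chemical energy through photosynthesis"

# --- TF-IDF + cosine similarity between model answer and student answer ---
tfidf = TfidfVectorizer()
vectors = tfidf.fit_transform([model_answer, student_answer])
cos_score = cosine_similarity(vectors[0], vectors[1])[0, 0]

# --- Word Mover's Distance over pretrained Word2vec embeddings ---
# (lower distance = closer meaning; downloads a large model and needs the
# POT package for the optimal-transport computation)
w2v = api.load("word2vec-google-news-300")
wmd = w2v.wmdistance(model_answer.lower().split(), student_answer.lower().split())

print(f"cosine similarity: {cos_score:.3f}, WMD: {wmd:.3f}")

# --- Multinomial Naive Bayes grade predictor on a toy training set ---
train_answers = [
    "plants use sunlight to make chemical energy",   # strong answer
    "photosynthesis happens in the chloroplast",     # partial answer
    "plants need water",                             # weak answer
]
train_grades = ["A", "B", "C"]
X_train = tfidf.transform(train_answers)   # reuse the fitted TF-IDF vocabulary
clf = MultinomialNB().fit(X_train, train_grades)
print("predicted grade:", clf.predict(tfidf.transform([student_answer]))[0])
```

In practice, the similarity scores and the classifier's prediction would be combined with keyword coverage against the solution statement to produce a final mark; that aggregation step is described in the body of the paper rather than in this sketch.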