International Journal of Social Science and Humanities Research, ISSN 2348-3156 (Print), ISSN 2348-3164 (Online), Vol. 10, Issue 1, pp. 274-282, January-March 2022. Available at: www.researchpublish.com
The Potentials of Automated Feedback Systems
for Instructors in Higher Education
Ammar Almutawa
PhD Candidate, School of Computer Science, College of Engineering and Physical Sciences, University of Guelph, Canada
Abstract: Automated feedback systems have been associated with significant improvements in the outcomes for
learners in higher education. The advantages for learners are well reported, but surprisingly few articles have
investigated the advantages of automated feedback for instructors. This article reviews the use of feedback and automated feedback systems in higher education to assist instructors in self-assessing and privately identifying potential improvements to their instructional practices. It first describes how feedback is currently provided in higher education settings and then discusses the requirements, technology, and innovations needed to create automated feedback systems for instructors. The proposed automated feedback system aims to assist university instructors by providing suggestions and feedback that could help them examine their own work privately and immediately.
Keywords: Technologies Applied to Education, Feedback in Higher Education, Post-secondary Instructors,
Automated Feedback Systems, Performance, Evaluation and Assessment.
I. INTRODUCTION
Feedback helps people exchange ideas and take actions based on their knowledge, experience, and level of understanding of a specific action or input (1; 2). It can influence recipients to change their actions (3). Feedback can be defined as information provided to someone about a certain action or
performance (3; 4; 5). Feedback is often generated by a human instructor or tutor; however, feedback can also be provided
automatically using computer systems (6; 3). Feedback is a powerful and effective tool to provide information to
educators and students (1; 2).
In higher education, providing immediate or delayed feedback can impact the learning process (7; 8; 9). Feedback is mostly generated by a human instructor to assist students with their learning tasks (7). However, feedback can also be provided automatically using computer systems. Computerized learning systems can provide automated feedback to help students without human involvement (10; 11; 12; 13). Intelligent Tutoring Systems are an example of computerized learning systems that can help students complete or improve their learning tasks automatically (14; 15; 1). These automated feedback systems can facilitate students' learning by automatically providing feedback about their learning activities.
The goal of this review article is to study how feedback is delivered to postsecondary educators. It addresses the following
two research questions: “How is feedback given to post-secondary instructors?” and “Is there potential for automated
feedback to be helpful for post-secondary instructors?”. To accomplish these goals, we first explore definitions of
feedback in higher education as well as how feedback is provided to students and instructors. The types of feedback are
grouped into distinct categories, each with advantages and disadvantages. Automated feedback systems and how they can
benefit learners and instructors are explored in the final review section. The article concludes with a research vision of
viable automated feedback systems to benefit post-secondary instructors.
II. FEEDBACK IN HIGHER EDUCATION
In post-secondary education, feedback is important for educators, learners, researchers, and administrators (5; 14; 15).
Educators can provide feedback about students’ activities and learning processes so that learners can improve their
performances. Students can provide feedback to instructors through end-of-semester evaluations, which can provide information about course content as well as instructors' performance (25; 26). Administrators can also use student-supplied feedback to make decisions about new educational initiatives, assess the effectiveness of prior decisions, evaluate a specific program, or write reports (27; 28).
Feedback provided to students and instructors can be delayed or immediate. The sooner feedback is produced, the better the results that can be achieved (6). Immediate feedback can be used to improve educational processes that now often rely on delayed feedback (3; 6). In particular, the process of providing feedback to instructors may be greatly improved if immediate feedback is generated automatically.
An automated feedback system is any system that uses data to generate information intended for a specific user (23; 29). Automated feedback can also be defined as feedback provided immediately after a specific task or action is completed, or after an automated system has produced outcomes for specific inputs (23; 27; 6). In higher education, automated feedback can help recipients take better actions, make changes to their learning tasks, and produce higher-quality outcomes (7; 3; 6). For example, grammar checkers give students automated feedback about their writing. This feedback can correct grammar and spelling mistakes without human involvement and can suggest better word choices that improve students' writing.
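As a minimal illustration of this idea (not a description of any particular commercial tool), the sketch below checks a piece of student writing against a small, hand-made list of common misspellings and returns automated feedback messages; the word list and all names are hypothetical.

```python
# Illustrative sketch only: a toy automated writing-feedback check.
# The misspelling list and function names are invented examples, not a real tool's API.

COMMON_MISSPELLINGS = {
    "recieve": "receive",
    "definately": "definitely",
    "seperate": "separate",
}

def writing_feedback(text: str) -> list[str]:
    """Return immediate, human-free feedback messages for a piece of student writing."""
    feedback = []
    for word in text.lower().split():
        cleaned = word.strip(".,;:!?")
        if cleaned in COMMON_MISSPELLINGS:
            feedback.append(
                f"Possible spelling error: '{cleaned}' -> '{COMMON_MISSPELLINGS[cleaned]}'"
            )
    if not feedback:
        feedback.append("No common spelling issues detected.")
    return feedback

print(writing_feedback("I will recieve the results and seperate the sections."))
```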
Feedback for Students
Feedback is an essential part of the teaching and learning process that can help students in their educational activities (30;
7; 15). The feedback provided to students contains comments and useful information that show improvement and achievement. Such feedback helps measure students' work by comparing the current instructional situation with the desired objectives in a specific instructional environment (14; 30; 5). It is mostly used to inform learners about their abilities, coursework activities, and assignment results (7; 3). Whether delayed or immediate, the feedback generally occurs after students finish their educational tasks (4; 15).
Delayed Feedback for Students
Students can receive feedback that is generally provided by instructors about their learning tasks (26; 5). The normal
interaction-feedback cycle happens when an instructor asks students to work on a task and waits for them to complete it.
Then, the instructor evaluates the work the students have done and provides feedback about their outcomes (30; 5). Providing feedback to students can help identify issues, mistakes, or areas that need improvement (27; 18; 29). For
example, when an instructor assigns some homework to students, they can work on that homework until they submit it.
The instructor will review each student’s work. Then, the instructor can provide feedback to students about their
homework results.
Another way of providing delayed feedback to students is using peers’ feedback (26; 5). Classmates can help each other
to work on learning activities and then provide feedback to each other (31; 32). Students can evaluate and provide
feedback to peers about different learning tasks they work on as teams or even about individual activities (33; 34; 35).
Students can follow predefined criteria provided by their instructor and use their knowledge and experience to evaluate
their group members and provide feedback (5; 31). Peers intend to provide useful feedback so that their teammates can produce better outcomes, since the evaluation results are incorporated into the final grade (31).
Delayed feedback to students can also be provided using computer-aided systems (1; 28). Many automated feedback systems produce feedback about learning tasks in a delayed way, such as sending student tests away to be scored and waiting for the results (1; 28; 36). That feedback takes the form of grades given to students at some point in the future. In this article we consider only automated feedback systems that provide immediate feedback.
Immediate Feedback for Students
Immediate feedback can be provided by instructors and automated feedback systems (16; 13). Instructors can provide
immediate feedback about students’ activities by watching students working on their learning tasks and then providing
feedback (3). Automated feedback systems can provide immediate help to students at a more convenient time, without the
need to wait for an instructor to watch them working on their learning tasks (6; 36; 37). These automated systems can provide simple feedback on a specific task, such as an assignment confirmation after submission or the grade received (26; 13). They can also provide more detailed and complex information, including real-time feedback with automatic corrections and steps for improving the work (7; 14).
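A hedged sketch of the simple case described above: confirming a submission and returning an auto-scored result immediately. The quiz format, answer key, and function names are illustrative assumptions, not the interface of any specific learning platform.

```python
# Illustrative sketch: immediate confirmation and auto-scoring of a short quiz submission.
# The answer key and data format are hypothetical.

ANSWER_KEY = {"q1": "B", "q2": "D", "q3": "A"}

def grade_submission(student_id: str, answers: dict[str, str]) -> dict:
    """Return an immediate confirmation and score for a submitted quiz."""
    correct = sum(1 for q, a in ANSWER_KEY.items() if answers.get(q) == a)
    return {
        "student": student_id,
        "received": True,                       # submission confirmation
        "score": f"{correct}/{len(ANSWER_KEY)}",
        "review": [q for q, a in ANSWER_KEY.items() if answers.get(q) != a],
    }

print(grade_submission("s123", {"q1": "B", "q2": "C", "q3": "A"}))
```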
Automated feedback can improve students’ performance by motivating them to be engaged in the learning process (3; 4;
38). Students can work on different learning activities and receive timely feedback after each step (7; 16; 39). Those steps
can assist students to improve their learning and achieve better results. There are many automated systems and online educational platforms that can help students receive feedback immediately (39; 30). Intelligent Tutoring Systems (ITS) and Massive Open Online Courses (MOOCs) are commonly used to support students, automatically evaluate their assignments, and provide feedback on their activities (40; 41; 42; 13). GradeMark is another online marking module that can review students' work and show plagiarism detection results. Such automated feedback has been reported to be of high quality and accuracy, with nearly the same positive results as human feedback (26; 16; 14).
Feedback for Instructors
Feedback provided to instructors is usually described as information about students, such as class averages or drop/fail rates (43; 44). Instructors mostly receive objective feedback about students' performance and class learning activities (43; 45). Feedback can also make instructors aware of issues in the learning process, such as problems with class activities or missing pieces in the class materials (23; 3; 18). This feedback can help instructors take appropriate proactive actions in specific situations, such as providing more help to students who are not doing well (44; 30).
In general, feedback delivered to instructors is a type of subjective evaluation that provides information which may help instructors in the future (46; 43). It can be a summary report, produced after the class has concluded, that shows students' performance, success rates, grades, and so on. Students can use end-of-semester evaluations to provide feedback on and evaluate their instructors (47; 46). The outcomes of those evaluations can be used by experts, administrators, or experienced instructors to assess the class instructor and provide feedback (47; 30). Such feedback is intended to help instructors make any changes they need to the learning process.
Delayed Feedback for Instructors
Delayed feedback can be provided to instructors by peers or colleagues, administrators, students, and computer systems (7; 29; 48). Peers and colleagues can use observation methods to assess instructors and then provide feedback based on their experience, knowledge, and skills (29; 32; 43). They can also review video recordings of learning activities to provide delayed feedback to instructors (7; 31; 49). Students can evaluate instructors and provide feedback using evaluation forms at the end of an instructional unit (46; 24). Experts and administrators can use those evaluations and students' feedback to assess instructors and provide delayed feedback (14; 33). Instructors can also use computer systems that provide information about the learning process, such as student grades, weekly summaries, failure rates, and so on (29; 7).
Evaluation forms administered at the end of each semester are an important way to identify required improvements (38; 50; 2). Instructors typically receive delayed feedback from students through these end-of-semester evaluations (14; 46; 15). This is the most common, but often ineffective, mechanism for providing feedback to instructors (51; 52). The delayed results can help instructors compare the actual learning outcomes with the designed objectives (26; 50). The feedback is usually a written, subjective report that shows the areas where instructors are lacking or need improvement. Nevertheless, this feedback can inform and support instructors in evaluating course materials and improving students' learning activities and outcomes (14; 15; 7).
Automated feedback systems can also provide delayed feedback to instructors (26; 7; 29). These systems can show students' performance, timelines, class summaries, and so on (21; 20; 1). For example, instructors can use them to check how many students submitted an assignment on time, submitted late, or did not submit at all. Such systems save instructors time when checking students' work. For instance, instructors can use Turnitin to check whether students have plagiarism or academic misconduct issues; Turnitin is a common automated feedback system that checks whether students' work is original and highlights potential plagiarism (26; 14). The feedback from Turnitin is a type of delayed report that instructors can use to evaluate students and provide the needed feedback.
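A small sketch of the submission-count example above, assuming a hypothetical LMS export with one record per student containing a submission timestamp (or None); the field names are invented for illustration.

```python
# Illustrative sketch: summarizing on-time, late, and missing submissions for an instructor.
# Record structure and field names are hypothetical, not tied to any specific LMS.
from datetime import datetime

def submission_summary(records: list[dict], deadline: datetime) -> dict:
    """Count on-time, late, and missing submissions from an LMS-style export."""
    summary = {"on_time": 0, "late": 0, "missing": 0}
    for rec in records:
        submitted_at = rec.get("submitted_at")
        if submitted_at is None:
            summary["missing"] += 1
        elif submitted_at <= deadline:
            summary["on_time"] += 1
        else:
            summary["late"] += 1
    return summary

records = [
    {"student": "s1", "submitted_at": datetime(2022, 1, 10, 22, 0)},
    {"student": "s2", "submitted_at": datetime(2022, 1, 11, 9, 30)},
    {"student": "s3", "submitted_at": None},
]
print(submission_summary(records, deadline=datetime(2022, 1, 11, 0, 0)))
```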
Immediate Feedback for Instructors
Few studies have investigated providing immediate feedback to instructors (14; 15; 1). Immediate feedback is produced automatically right after a learning task is finished (6; 1; 30). Most existing automated and online feedback systems provide feedback about students' learning activities (13; 42). Instructors may benefit from these feedback outcomes by gaining a clear picture of students' performance, which allows them to track students' activities and mistakes directly and then provide the help needed to address those issues. For example, when using online learning systems, instructors can receive immediate feedback from students about whether they understand the material, and can then provide suitable help.
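As a rough sketch of this scenario, the snippet below aggregates hypothetical "understood/confused" responses collected during a lecture and flags topics where confusion crosses a threshold; the topic names, data, and threshold are illustrative assumptions.

```python
# Illustrative sketch: aggregating in-class "understood/confused" signals per topic
# so the instructor gets an immediate view of where students are struggling.
# Data, topic names, and the threshold are hypothetical.
from collections import defaultdict

def confusion_report(responses: list[tuple[str, bool]], threshold: float = 0.4) -> list[str]:
    """Return topics where the share of 'confused' responses exceeds the threshold."""
    counts = defaultdict(lambda: [0, 0])           # topic -> [confused, total]
    for topic, confused in responses:
        counts[topic][1] += 1
        if confused:
            counts[topic][0] += 1
    return [t for t, (c, n) in counts.items() if n and c / n > threshold]

responses = [("recursion", True), ("recursion", True), ("recursion", False),
             ("loops", False), ("loops", False)]
print(confusion_report(responses))   # e.g. ['recursion']
```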
III. DISCUSSION
Providing feedback and automated feedback can offer advantages to students and instructors in post-secondary education
(16; 14; 26). The feedback can be delayed or immediate, depending on the recipient's needs, and it involves an action, a type of delivery, and the components of the feedback itself. Timely feedback can provide actionable information to higher education students and instructors. However, most existing methods of providing human and automated feedback aim to serve students in one way or another. The outcomes of computerized learning systems such as auto-graders and grammar checkers can facilitate learning tasks and provide useful information to students (26; 20; 23).
Research exploring automated feedback systems for the benefit of instructors has not received the same amount of attention. This may be a result of the complicated and unclear ways of evaluating instructors' performance, and of the possibility that automated feedback for instructors could be used for evaluation by institutions rather than as information for the individuals themselves. Students can readily be evaluated by comparing the outcomes of their learning activities with predefined standards or rules, which makes it straightforward to provide the required feedback, whether delayed or immediate. In the instructor's case, however, it is not easy to set standards and rules for instructors to follow in their instructional activities. This makes it harder to design an automated feedback system that provides feedback for instructors' benefit.
End-of-semester evaluations have been used for decades to evaluate class instructors and course materials and to provide feedback about them. This method has clear advantages in higher education, especially for providing delayed feedback to instructors, but it requires time, knowledge, and skills from the evaluators (43; 7; 24). There are also several issues with end-of-semester evaluations, such as students' honesty and how effectively the evaluations measure course quality. These evaluations are only beneficial if students answer honestly, and student honesty cannot easily be measured (53; 51). In addition, end-of-semester evaluations are a summative and subjective measurement rather than a formative, ongoing, and fair evaluation (52). In fact, providing feedback for instructors is a very complex process (51; 53; 52).
In higher education, instructors are normally measured by their scholarly output because it is quantifiable, but there is no complete way to measure instructors' success (51). Researchers have argued that providing immediate feedback to instructors is a new and interesting area of research (47; 46; 42), and that feedback which supports instructors' needs is a good way to improve course content and instructional approaches, which in turn enhances the learning process (28; 54; 39). Other studies have claimed that feedback can help improve faculty performance and have shown that feedback analysis is important in helping educators make better improvements in their classes (27; 19).
Designing automated feedback systems that serve instructors is a challenging and complex task because of the different types of feedback and objectives involved (1; 29). Feedback is mostly used to identify mistakes, which helps students recognize their errors, but instructors cannot easily adapt this kind of feedback to their own advantage (20), because feedback provided to students can be evaluated and examined more easily through their outcomes (55). Evaluating feedback for instructors is more difficult. Studies have shown that instructors' performance can be evaluated by examiners, peer reviews, or self-assessment (55), but few studies have investigated how instructors' performance could be assessed. The reported advantages of automated feedback for instructors have mostly been drawn from a limited number of studies focusing only on online course platforms (5; 7; 51).
IV. THE POTENTIALS OF AUTOMATED FEEDBACK SYSTEMS FOR INSTRUCTORS
Providing instructors with timely feedback can allow them to make changes to their practices. This immediate feedback
can also facilitate the process of self-improvement and instructional change. The feedback produced automatically could
show information to instructors about specific learning activities. For example, when instructors receive many emails
from students, automated feedback systems could help detect students' common issues from the questions they ask and then provide the class instructor with a summary of which students need help and what concerns they have. Such feedback could make instructors aware of what they need to address, without any human involvement, and it would arrive in a timely manner, so there is no need to wait until other people provide it.
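The email scenario could be prototyped with something as simple as keyword matching before any heavier text mining is applied. The sketch below groups hypothetical student emails by assumed topic keywords and reports how many students raised each concern; the keyword lists and message texts are invented for illustration.

```python
# Illustrative sketch: grouping student emails by simple topic keywords
# to give the instructor a quick summary of common concerns.
# Keywords and example messages are hypothetical.
TOPIC_KEYWORDS = {
    "deadline": ["deadline", "extension", "late"],
    "assignment_clarity": ["unclear", "confused"],
    "grades": ["grade", "mark", "regrade"],
}

def summarize_emails(emails: list[dict]) -> dict[str, int]:
    """Count how many distinct students mention each topic at least once."""
    counts = {topic: set() for topic in TOPIC_KEYWORDS}
    for email in emails:
        body = email["body"].lower()
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(k in body for k in keywords):
                counts[topic].add(email["student"])
    return {topic: len(students) for topic, students in counts.items()}

emails = [
    {"student": "s1", "body": "Could I get an extension on the deadline?"},
    {"student": "s2", "body": "Question 3 is unclear to me."},
    {"student": "s3", "body": "I am confused about the deadline for lab 2."},
]
print(summarize_emails(emails))
```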
Automated feedback systems that provide immediate feedback to instructors could be developed using data mining applied to instructional outputs (38; 56; 45). A great deal of data is available to help instructors understand how their courses are going, including course planning documents, grades, emails, discussion forums, assignment grading notes, quiz scores, lecture attendance, and more. Data mining can use these information resources to provide immediate feedback to instructors (28; 47; 54). Data mining tools and techniques can detect and predict information such as which students need help, which students are at risk of dropping out, and expected learning performance (57; 58; 59). In addition, data mining techniques can help discover useful information about formative and summative evaluations and can assist people in evaluating themselves (59; 21).
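A minimal sketch of the kind of prediction mentioned above, using scikit-learn's logistic regression on invented features (quiz average, attendance rate, forum posts) to estimate which students might need help. The features, the tiny synthetic dataset, and the risk threshold are purely illustrative assumptions, not a validated model.

```python
# Illustrative sketch: flagging students who may need help with a simple classifier.
# Feature choices and the synthetic data are hypothetical; a real system would need
# far more data and careful validation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: quiz average (0-1), attendance rate (0-1), forum posts (count)
X_train = np.array([[0.9, 0.95, 12], [0.85, 0.8, 5], [0.4, 0.5, 1],
                    [0.35, 0.6, 0], [0.7, 0.9, 7], [0.5, 0.4, 2]])
y_train = np.array([0, 0, 1, 1, 0, 1])   # 1 = needed help in past offerings

model = LogisticRegression().fit(X_train, y_train)

current_students = {"s1": [0.45, 0.55, 1], "s2": [0.88, 0.92, 9]}
for sid, features in current_students.items():
    risk = model.predict_proba([features])[0][1]
    if risk > 0.5:
        print(f"{sid}: may need help (estimated risk {risk:.2f})")
```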
Automated feedback systems can offer many benefits to instructors by producing feedback right after they finish their instructional tasks. Such a system can use instructional outputs such as course preparation materials, course outlines, course websites, assignments, and lab descriptions as data inputs to be processed, and then provide feedback to instructors as system outputs, as shown in Figure 1. Automated feedback systems for instructors could provide immediate feedback about teaching methods, skills, missing requirements, summary reports, and self-assessment.
Figure 1: Automated Feedback System
This article outlines the possibility of a framework that can provide automated and immediate feedback to university instructors. The framework aims to help instructors self-assess parts of the learning process by applying data mining techniques to various post-secondary instructional outputs. These outputs, such as course materials, chat logs, student questions, emails, and assignment descriptions, can serve as the inputs for the framework, which processes them automatically and produces useful outputs. The resulting feedback could raise instructors' awareness of which parts of the learning process are missing and could help them self-assess their performance immediately. The framework's outcomes could be measured by comparing its automated feedback outputs with human feedback to ensure the framework's accuracy and quality.
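One simple way to operationalize the comparison suggested above is to measure agreement between the automated feedback and an instructor's (or expert's) judgments on the same items, for example with Cohen's kappa; the labels below are invented for illustration.

```python
# Illustrative sketch: measuring agreement between automated and human feedback labels.
# The label values are hypothetical examples.
from sklearn.metrics import cohen_kappa_score

human_labels = ["ok", "needs_work", "ok", "needs_work", "ok", "ok"]
automated_labels = ["ok", "needs_work", "needs_work", "needs_work", "ok", "ok"]

kappa = cohen_kappa_score(human_labels, automated_labels)
print(f"Agreement between automated and human feedback (Cohen's kappa): {kappa:.2f}")
```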
V. CONCLUSION
In conclusion, this article describes how feedback is produced for learners and educators in higher education. The feedback provided to students is mainly generated by instructors or automated feedback systems, while the feedback provided to instructors is generated by students, peers, and administrators. A review of the existing literature showed that limited work has been done on automated feedback for instructors. Instead, many studies describe how instructors receive human or delayed feedback through student evaluations, self-evaluations, peer observations, videotape recording and review, and staff supervisor consultation.
Feedback provided to instructors is often delayed, which prevents them from acting on it in a timely fashion. In the big-data era, delayed or purely human feedback has clear disadvantages, so there is a need to explore faster and more effective ways of providing feedback. Automated feedback systems might be a solution, providing timely suggestions and useful information to instructors. University instructors stand to benefit from exploring and adopting more automated feedback systems.
We propose a framework that can provide feedback to instructors automatically. Automated feedback systems for instructors become possible when the required data, measurements, standards, and evaluation tools are identified and understood. Such systems will help improve the overall learning process, thereby improving educational objectives and the quality of learning and outcomes.
REFERENCES
[1] T. Bimba, N. Idris, A. Al-Hunaiyyan, R. B. Mahmud, and N. L. B. M. Shuib, “Adaptive feedback in computer-based
learning environments: a review,” Adaptive Behavior, vol. 25, pp. 217–234, Oct. 2017.
[2] J. Busler, C. Kirk, J. Keeley, and W. Buskist, “What Constitutes Poor Teaching? A Preliminary Inquiry Into the
Misbehaviors of Not-So-Good Instructors,” Teaching of Psychology, vol. 44, pp. 330–334, Oct. 2017.
[3] J. Hattie and H. Timperley, “The Power of Feedback,” Review of Educational Research, vol. 77, pp. 81–112, Mar.
2007.
[4] R. Rodgers, “Attending to Student Voice: The Impact of Descriptive Feedback on Learning and Teaching,”
Curriculum Inquiry, vol. 36, pp. 209–237, Jan. 2006.
[5] L.-T. Chen and L. Liu, “Instructor’s Self-Assessment of Content Design in Online Courses,” International Journal of
Technology in Teaching and learning, p. 19, 2018.
[6] G. Deeva, D. Bogdanova, E. Serral, M. Snoeck, and J. De Weerdt, “A review of automated feedback systems for
learners: Classification framework, challenges and opportunities,” Computers & Education, vol. 162, p. 104094,
Mar. 2021.
[7] G. Cheng, S.-M. G. Chwo, J. Chen, D. Foung, V. Lam, and M. Tom, “Automatic Classification of Teacher Feedback
and Its Potential Applications for EFL Writing,” International Conference on Computers in Education, p. 6, 2017.
[8] M. T. Orr, L. Hollingworth, and J. Cook, “Embedding Performance Assessments for Leaders into Preparation: A
Comparison of Approaches, Candidates, and Assessment Evidence,” Journal of School Leadership, vol. 28, pp. 294–
314, May 2018.
[9] O. Ku, J.-K. Liang, S.-B. Chang, and M. Wu, “Sokrates Teaching Analytics System (STAS): An Automatic Teaching Behavior Analysis System for Facilitating Teacher Professional Development,” International Conference on Computers in Education, p. 10, 2018.
[10] N. Augar, C. J. Woodley, D. Whitefield, and M. Winchester, “Exploring academics’ approaches to managing team
assessment,” International Journal of Educational Management, vol. 30, pp. 1150–1162, Aug. 2016.
[11] J. McCuaig and J. Baldwin, “Identifying Successful Learners from Interaction Behaviour,” International Conference
on Educational Data Mining, p. 4, 2012.
[12] H. Keuning, J. Jeuring, and B. Heeren, “Towards a Systematic Review of Automated Feedback Generation for
Programming Exercises – Extended Version,” ACM Conference, p. 20, 2016.
[13] A. Agrawal, J. Venkatraman, and S. Leonard, “YouEDU: Addressing Confusion in MOOC Discussion Forums by
Recommending Instructional Video Clips,” International Conference on Educational Data Mining, p. 8, 2015.
[14] S. Burrows and M. Shortis, “An evaluation of semi-automated, collaborative marking and feedback systems:
Academic staff perspectives,” Australasian Journal of Educational Technology, vol. 27, Nov. 2011.
[15] J. Ranalli, S. Link, and E. Chukharev-Hudilainen, “Automated writing evaluation for formative assessment of
second language writing: investigating the accuracy and usefulness of feedback as part of argument-based
validation,” Educational Psychology, vol. 37, pp. 8–25, Jan. 2017.
[16] T. Price, R. Zhi, and T. Barnes, “Evaluation of a Data-driven Feedback Algorithm for Open-ended Programming,”
International Conference on Educational Data Mining, p. 6, 2017.
[17] M. Kumar, M.-Y. Kan, B. C. Y. Tan, and K. Ragupathi, “Learning Instructor Intervention from MOOC Forums:
Early Results and Issues,” International Conference on Educational Data Mining, p. 8, 2015.
[18] V. J. Shute, “Focus on Formative Feedback,” Review of Educational Research, vol. 78, pp. 153–189, Mar. 2008.
[19] A. Shibani, S. Knight, and S. Buckingham Shum, “Educator perspectives on learning analytics in classroom
practice,” The Internet and Higher Education, vol. 46, p. 100730, July 2020.
[20] H. Keuning, J. Jeuring, and B. Heeren, “A Systematic Literature Review of Automated Feedback Generation for
Programming Exercises,” ACM Transactions on Computing Education, vol. 19, pp. 1–43, Jan. 2019.
[21] Y. Xiong and Y.-F. B. Wu, “Understanding Students’ Perceptions of an Automated Feedback System: an Empirical
Study Based on UTAUT,” Americas Conference on Information Systems, p. 10, 2019.
[22] I. Koprinska, J. Stretton, and K. Yacef, “Students at Risk: Detection and Remediation,” International Conference on
Educational Data Mining, p. 4, 2015.
[23] L. Gusukuma, A. C. Bart, and D. Kafura, “Pedal: An Infrastructure for Automated Feedback Systems,” in
Proceedings of the 51st ACM Technical Symposium on Computer Science Education, (Portland OR USA), pp.
1061– 1067, ACM, Feb. 2020.
[24] A. Wang, S. Yu, L. Chen, X. Gao, and D. Wang, “Development of a Visualization based System for Analyzing
Teachers’ Emotional Experience in Classroom Observation Activities,” International Conference on Computers in
Education, p. 6, 2018.
[25] K. D. Mattingly, M. C. Rice, and Z. L. Berge, “Learning analytics as a tool for closing the assessment loop in higher education,” Knowledge Management & E-Learning: An International Journal, pp. 236–247, Sept. 2012.
[26] V. Clark-Gordon, N. D. Bowman, A. A. Hadden, and B. N. Frisby, “College instructors and the digital red pen: An
exploratory study of factors influencing the adoption and non-adoption of digital written feedback technologies,”
Computers & Education, vol. 128, pp. 414–426, Jan. 2019.
[27] R. Lalit, K. Handa, and N. Sharma, “Fuzzy Based Automated Feedback Collection and Analysis System,” Advances and Applications in Mathematical Sciences, vol. 18, no. 8, p. 10, 2019.
[28] A. Lachner, C. Burkhart, and M. Nückles, “Formative computer-based feedback in the university classroom: Specific concept maps scaffold students’ writing,” Computers in Human Behavior, vol. 72, pp. 459–469, July 2017.
[29] N. Tanoue, L. N. Korovin, M. Carton, C. A. Galvani, and I. Ghaderi, “Faculty feedback versus residents’ self-
assessment of operative performance: Different but complementary,” The American Journal of Surgery, vol. 215,
pp. 288–292, Feb. 2018.
[30] J. Carvalho, “The design of an educationally beneficial immediate feedback system,” MS thesis, p. 72, 2017.
[31] M. J. Dingel, W. Wei, and A. Huq, “Cooperative learning and peer evaluation: The effect of free riders on team
performance and the relationship between course performance and peer evaluation,” Journal of the Scholarship of
Teaching and Learning, vol. 13, no. 1, p. 12, 2013.
[32] J. S. Kane and E. E. Lawler III, “Methods of Peer Assessment,” p. 32, 1978.
[33] P. Bedore and B. O’Sullivan, “Addressing Instructor Ambivalence about Peer Review and Self-Assessment,”
Writing Program Administration - Journal of the Council of Writing Program Administrators, p. 27, 2011.
[34] B. M. Thompson, J. D. Gonzalo, and R. E. Levine, “The power of the written word: team assessment of behaviour,”
Medical Education, vol. 50, pp. 706– 708, July 2016.
[35] N. Falchikov and J. Goldfinch, “Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer
and Teacher Marks,” p. 36, 2000.
[36] S. Dikli and S. Bleyle, “Automated Essay Scoring feedback for second language writers: How does it compare to
instructor feedback?,” Assessing Writing, vol. 22, pp. 1–17, Oct. 2014.
[37] Y. Yang and L. F. Cornelious, “Preparing Instructors for Quality Online Instruction,” Online Journal of Distance
Learning Administration, p. 14, 2005.
[38] G. Riahi, “E-learning Systems Based on Cloud Computing: A Review,” Procedia Computer Science, vol. 62, pp.
352–359, 2015.
[39] Y. Akbulut and C. S. Cardak, “Adaptive educational hypermedia accommodating learning styles: A content analysis
of publications from 2000 to 2011,” Computers & Education, vol. 58, pp. 835–842, Feb. 2012.
[40] G. Kennedy, I. Ioannou, Y. Zhou, J. Bailey, and S. O’Leary, “Mining interactions in immersive learning
environments for real-time student feedback,” Australasian Journal of Educational Technology, vol. 29, May 2013.
[41] M. H. Qasem, R. Qaddoura, and B. Hammo, “Educational Data Mining (EDM): A Review,” Conference on New Trends in Information Technology (NTIT), p. 9, 2017.
[42] F. Mi and B. Faltings, “Adaptive Sequential Recommendation for Discussion Forums on MOOCs using Context
Trees,” International Conference on Educational Data Mining, p. 8, 2017.
[43] Z. Papamitsiou and A. A. Economides, “Learning Analytics and Educational Data Mining in Practice,” Educational Technology and Society, p. 16, 2014.
[44] A. Dutt, M. A. Ismail, and T. Herawan, “A Systematic Review on Educational Data Mining,” IEEE Access, vol. 5,
pp. 15991–16005, 2017.
[45] J. Rusia, “Data Mining Technique on Sentiment Analysis and Computation of Views,” International Journal of Engineering Applied Sciences and Technology, 2017.
[46] A. Sharma, “A Research Review on Comparative Analysis of Data Mining Tools, Techniques and Parameters,” International Journal of Advanced Research in Computer Science, vol. 8, pp. 523–529, Aug. 2017.
[47] C. Romero and S. Ventura, “Educational Data Mining: A Review of the State of the Art,” IEEE Transactions on
Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 40, pp. 601–618, Nov. 2010.
[48] M. Dingel and W. Wei, “Influences on peer evaluation in a group project: an exploration of leadership,
demographics and course performance,” Assessment & Evaluation in Higher Education, vol. 39, pp. 729–742, Aug.
2014.
[49] A. Planas-Lladó, L. Feliu, G. Arbat, J. Pujol, J. J. Suñol, F. Castro, and C. Martí, “An analysis of teamwork based on self and peer evaluation in higher education,” Assessment & Evaluation in Higher Education, vol. 46, pp. 191–207, Feb. 2021.
[50] A. M. Shahiri, W. Husain, and N. A. Rashid, “A Review on Predicting Student’s Performance Using Data Mining
Techniques,” Procedia Computer Science, vol. 72, pp. 414–422, 2015.
[51] T. T. York, C. Gibson, and S. Rankin, “Defining and Measuring Academic Success,” Practical Assessment Research
and Evaluation, 2015. Publisher: University of Massachusetts Amherst.
[52] K. Young, J. Joines, T. Standish, and V. Gallagher, “Student evaluations of teaching: the impact of faculty
procedures on response rates,” Assessment & Evaluation in Higher Education, vol. 44, pp. 37–49, Jan. 2019.
[53] L. McClain, A. Gulbis, and D. Hays, “Honesty on student evaluations of teaching: effectiveness, purpose, and timing
matter!,” Assessment & Evaluation in Higher Education, vol. 43, pp. 369–385, Apr. 2018.
[54] M. K. Keleş, “An Overview: The Impact of Data Mining Applications on Various Sectors,” Technical Journal, p. 5, 2017.
[55] J. Kim and K. R. Lee, “Effects of an examiner’s positive and negative feedback on self-assessment of skill performance, emotional response, and self-efficacy in Korea: a quasi-experimental study,” BMC Medical Education, vol. 19, p. 142, Dec. 2019.
[56] O. A. Omitaomu, X. Li, and S. Zhou, “Optimization Based Data Mining Approach for Forecasting Real- Time
Energy Demand,” p. 11, 2015.
[57] S. Hussain, N. Abdulaziz Dahan, F. M. Ba-Alwi, and N. Ribata, “Educational Data Mining and Analysis of
Students’ Academic Performance Using WEKA,” Indonesian Journal of Electrical Engineering and Computer
Science, vol. 9, p. 447, Feb. 2018.
[58] C. Condon and M. Clifford, “Measuring Principal Performance,” Quality School Leadership, 2012.
[59] C. Romero and S. Ventura, “Educational data mining: A survey from 1995 to 2005,” Expert Systems with
Applications, vol. 33, pp. 135–146, July 2007.

More Related Content

PDF
STUDENTS’PATTERNS OF INTERACTION WITH A MATHEMATICS INTELLIGENT TUTOR:LEARNIN...
PDF
STUDENTS’PATTERNS OF INTERACTION WITH A MATHEMATICS INTELLIGENT TUTOR:LEARNIN...
PDF
Student's Patterns of Interaction with a Mathematics Intelligent Tutor: Learn...
PDF
Student's Patterns of Interaction with a Mathematics Intelligent Tutor: Learn...
PDF
STUDENTS’PATTERNS OF INTERACTION WITH A MATHEMATICS INTELLIGENT TUTOR:LEARNIN...
PDF
Online Student Feedback System
PDF
E-SUPPORTING PERFORMANCE STYLES BASED ON LEARNING ANALYTICS FOR DEVELOPMENT O...
PDF
IRJET- Online Examination System
STUDENTS’PATTERNS OF INTERACTION WITH A MATHEMATICS INTELLIGENT TUTOR:LEARNIN...
STUDENTS’PATTERNS OF INTERACTION WITH A MATHEMATICS INTELLIGENT TUTOR:LEARNIN...
Student's Patterns of Interaction with a Mathematics Intelligent Tutor: Learn...
Student's Patterns of Interaction with a Mathematics Intelligent Tutor: Learn...
STUDENTS’PATTERNS OF INTERACTION WITH A MATHEMATICS INTELLIGENT TUTOR:LEARNIN...
Online Student Feedback System
E-SUPPORTING PERFORMANCE STYLES BASED ON LEARNING ANALYTICS FOR DEVELOPMENT O...
IRJET- Online Examination System

Similar to The Potentials of Automated Feedback Systems for Instructors in Higher Education (20)

PDF
E-supporting Performance Styles based on Learning Analytics for Development o...
PDF
E-SUPPORTING PERFORMANCE STYLES BASED ON LEARNING ANALYTICS FOR DEVELOPMENT O...
PDF
CAREER GUIDANCE AND COUNSELING PORTAL FOR SENIOR SECONDARY SCHOOL STUDENTS
PDF
Implementation of different tutoring system to enhance student learning
PDF
The Role of Feedback in Continuous Improvement in Schools (www.kiu.ac.ug)
PDF
ICT in Assessment: A Backbone for Teaching and Learning Process
PDF
Enhancing Student Learning through Proactive Feedback Based Adaptive Teaching...
PDF
A Coursework Support System for Offering Challenges and Assistance by Analyzi...
PDF
ijrar_issue_20543888.pdf
PDF
Automated Essay Score Predictions As A Formative Assessment Tool
PDF
Advice For Action With Automatic Feedback Systems
PDF
Elective Subject Selection Recommender System
PDF
Study Support and Feedback System Using Natural Language Processing
PDF
PDF
IRJET- Institution Evaluation System
PDF
Applying Peer-Review For Programming Assignments
PDF
IRJET- Predicting Academic Performance based on Social Activities
PDF
Enhancing student learning through
PDF
PDF
Improving E-Learning by Integrating a Metacognitive Agent
E-supporting Performance Styles based on Learning Analytics for Development o...
E-SUPPORTING PERFORMANCE STYLES BASED ON LEARNING ANALYTICS FOR DEVELOPMENT O...
CAREER GUIDANCE AND COUNSELING PORTAL FOR SENIOR SECONDARY SCHOOL STUDENTS
Implementation of different tutoring system to enhance student learning
The Role of Feedback in Continuous Improvement in Schools (www.kiu.ac.ug)
ICT in Assessment: A Backbone for Teaching and Learning Process
Enhancing Student Learning through Proactive Feedback Based Adaptive Teaching...
A Coursework Support System for Offering Challenges and Assistance by Analyzi...
ijrar_issue_20543888.pdf
Automated Essay Score Predictions As A Formative Assessment Tool
Advice For Action With Automatic Feedback Systems
Elective Subject Selection Recommender System
Study Support and Feedback System Using Natural Language Processing
IRJET- Institution Evaluation System
Applying Peer-Review For Programming Assignments
IRJET- Predicting Academic Performance based on Social Activities
Enhancing student learning through
Improving E-Learning by Integrating a Metacognitive Agent
Ad

More from Research Publish Journals (Publisher) (20)

PDF
Differentiated Learning Exemplars for Students’ Academic Achievement in Engl...
PDF
Students’ Awareness of Cavite Agritourism: Basis for a Proposed Information ...
PDF
Combining the Best of Online and Face-to-Face Learning: Hybrid and Blended L...
PDF
The Impact of COVID-19 Crisis on Teaching and Learning English
PDF
Teachers’ Views on the Role of Non-verbal Communication in EFL Classrooms
PDF
CRITICAL THINKING AND ONLINE LEARNING DURING THE PANDEMIC
PDF
An Inquiry on the Awareness of Education Students about the Issues and Trend...
PDF
THE CHALLENGE OF USING BLENDED LEARNING IN THE TEACHING OF ENGLISH AS A FOR...
PDF
DELEGATION OF DUTIES AS A PRINCIPALS’ MOTIVATIONAL PRACTICE ON TEACHER RETE...
PDF
Impact of Covid-19 on Education: Challenges Faced By Students, Teachers and ...
PDF
IMPACT OF COVID-19 PANDEMIC ON FILIPINO PASSENGERS’ CRUISE INTENTION
PDF
Effect of Sport Injuries on the Level of Confidence and Anxiety among Athlet...
PDF
THE IMPACT OF TRAINING AND DEVELOPMENT ON EMPLOYEES PERFORMANCE IN AN ORGAN...
PDF
Impact of Schemes Designed for Women in Goa on their Self-Esteem and Psychol...
PDF
A comparative study between the new and old university laws in Saudi Arabia
PDF
THE ECONOMIC IMPLICATIONS OF HIGH DEBTS PROFILE TO A DEVELOPING NATION: EMP...
PDF
Teachers’ Level of Knowledge and Attitude, Stressors and Coping Mechanisms I...
PDF
A Stereotypical Framework of Pressure, Trauma and Relief in Chitra Banerjee ...
PDF
Modular and Recorded Video Lessons in Teaching Music, Arts, Physical Educati...
PDF
Impact of Workplace Bullying to Work Performance among Filipino Cruise Staff
Differentiated Learning Exemplars for Students’ Academic Achievement in Engl...
Students’ Awareness of Cavite Agritourism: Basis for a Proposed Information ...
Combining the Best of Online and Face-to-Face Learning: Hybrid and Blended L...
The Impact of COVID-19 Crisis on Teaching and Learning English
Teachers’ Views on the Role of Non-verbal Communication in EFL Classrooms
CRITICAL THINKING AND ONLINE LEARNING DURING THE PANDEMIC
An Inquiry on the Awareness of Education Students about the Issues and Trend...
THE CHALLENGE OF USING BLENDED LEARNING IN THE TEACHING OF ENGLISH AS A FOR...
DELEGATION OF DUTIES AS A PRINCIPALS’ MOTIVATIONAL PRACTICE ON TEACHER RETE...
Impact of Covid-19 on Education: Challenges Faced By Students, Teachers and ...
IMPACT OF COVID-19 PANDEMIC ON FILIPINO PASSENGERS’ CRUISE INTENTION
Effect of Sport Injuries on the Level of Confidence and Anxiety among Athlet...
THE IMPACT OF TRAINING AND DEVELOPMENT ON EMPLOYEES PERFORMANCE IN AN ORGAN...
Impact of Schemes Designed for Women in Goa on their Self-Esteem and Psychol...
A comparative study between the new and old university laws in Saudi Arabia
THE ECONOMIC IMPLICATIONS OF HIGH DEBTS PROFILE TO A DEVELOPING NATION: EMP...
Teachers’ Level of Knowledge and Attitude, Stressors and Coping Mechanisms I...
A Stereotypical Framework of Pressure, Trauma and Relief in Chitra Banerjee ...
Modular and Recorded Video Lessons in Teaching Music, Arts, Physical Educati...
Impact of Workplace Bullying to Work Performance among Filipino Cruise Staff
Ad

Recently uploaded (20)

PDF
O7-L3 Supply Chain Operations - ICLT Program
PPTX
PPH.pptx obstetrics and gynecology in nursing
PPTX
Cell Types and Its function , kingdom of life
PDF
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PDF
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
PDF
Computing-Curriculum for Schools in Ghana
PDF
TR - Agricultural Crops Production NC III.pdf
PDF
FourierSeries-QuestionsWithAnswers(Part-A).pdf
PPTX
Institutional Correction lecture only . . .
PDF
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PPTX
Renaissance Architecture: A Journey from Faith to Humanism
PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PPTX
Lesson notes of climatology university.
PDF
VCE English Exam - Section C Student Revision Booklet
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PDF
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
PDF
Insiders guide to clinical Medicine.pdf
PDF
RMMM.pdf make it easy to upload and study
PPTX
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
O7-L3 Supply Chain Operations - ICLT Program
PPH.pptx obstetrics and gynecology in nursing
Cell Types and Its function , kingdom of life
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
Computing-Curriculum for Schools in Ghana
TR - Agricultural Crops Production NC III.pdf
FourierSeries-QuestionsWithAnswers(Part-A).pdf
Institutional Correction lecture only . . .
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
Renaissance Architecture: A Journey from Faith to Humanism
O5-L3 Freight Transport Ops (International) V1.pdf
Lesson notes of climatology university.
VCE English Exam - Section C Student Revision Booklet
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
Insiders guide to clinical Medicine.pdf
RMMM.pdf make it easy to upload and study
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx

The Potentials of Automated Feedback Systems for Instructors in Higher Education

  • 1. ISSN 2348-3156 (Print) International Journal of Social Science and Humanities Research ISSN 2348-3164 (online) Vol. 10, Issue 1, pp: (274-282), Month: January - March 2022, Available at: www.researchpublish.com Page | 274 Research Publish Journals The Potentials of Automated Feedback Systems for Instructors in Higher Education Ammar Almutawa PhD Candidate. School of Computer Science, College of Engineering and Physical Sciences, University of Guelph, Canada Abstract: Automated feedback systems have been associated with significant improvements in the outcomes for learners in higher education. The advantages for learners are well reported, but surprisingly few articles have investigated the advantages of automated feedback for instructors. This article reviews the use of feedback and automated feedback systems in higher education to assist instructors to self-assess and to privately identify potential improvements to their instructional practices. This article first describes how feedback is currently provided in higher education settings and then discusses requirements, technology, and innovations needed to create automated feedback systems for instructors. The proposed automated feedback system aims to assist university instructors by providing suggestions and feedback that could help to self-examine their work privately and immediately. Keywords: Technologies Applied to Education, Feedback in Higher Education, Post-secondary Instructors, Automated Feedback Systems, Performance, Evaluation and Assessment. I. INTRODUCTION Feedback can help people to exchange ideas and take actions using their knowledge, experiences, and the level of understanding about a specific action or input (1; 2). Feedback can influence recipients to change their actions in response to the feedback (3). Feedback can be defined as the information provided to someone about a certain action or performance (3; 4; 5). Feedback is often generated by a human instructor or tutor; however, feedback can also be provided automatically using computer systems (6; 3). Feedback is a powerful and effective tool to provide information to educators and students (1; 2). In higher education, providing immediate or delayed feedback can impact the learning process (7; 8; 9). Feedback is mostly generated by a human instructor to assist students on their learning tasks (7). However, feedback can be also provided automatically using computer systems. Computerized learning systems can provide automated feedback to help students without human involvement (10; 11; 12; 13). Intelligent Tutoring Systems are an example of the computerized learning systems which can help students to complete or improve their learning tasks automatically (14; 15; 1). These automated feedback systems can facilitate students learning by providing feedback about their learning activities automatically. The goal of this review article is to study how feedback is delivered to postsecondary educators. It addresses the following two research questions: “How is feedback given to post-secondary instructors?” and “Is there potential for automated feedback to be helpful for post-secondary instructors?”. To accomplish these goals, we first explore definitions of feedback in higher education as well as how feedback is provided to students and instructors. The types of feedback are grouped into distinct categories each with advantages and disadvantages. Automated feedback systems and how they can benefit learners and instructors are explored in the final review section. 
The article concludes with a research vision of viable automated feedback systems to benefit post-secondary instructors.
  • 2. ISSN 2348-3156 (Print) International Journal of Social Science and Humanities Research ISSN 2348-3164 (online) Vol. 10, Issue 1, pp: (274-282), Month: January - March 2022, Available at: www.researchpublish.com Page | 275 Research Publish Journals II. FEEDBACK IN HIGHER EDUCATION In post-secondary education, feedback is important for educators, learners, researchers, and administrators (5; 14; 15). Educators can provide feedback about students’ activities and learning processes so that learners can improve their performances. Students can provide feedback to instructors using the end of the semester evaluations which can provide information about course content as well as instructors’ performances (25; 26). Administrators can also use students supplied feedback to make decisions about new educational initiatives, the effectiveness of prior decisions, evaluating a specific program, or writing reports. (27; 28). Feedback can be delayed or immediate that are provided to students and instructors. The sooner feedback is produced the better results will be achieved (6). Immediate feedback can be used to improve educational processes than now often rely on delayed feedback (3; 6). In particular, the processes of providing feedback to instructors may be greatly improved if immediate feedback processes are employed Automatically. Automated feedback systems refer to any system that uses some data to generate information intended to be delivered to a specific user (23; 29). Automated feedback can also be defined as feedback provided right after a specific task or action is done or after automated feedback systems produced outcomes about specific inputs (23; 27; 6). In higher education, providing automated feedback can assist recipients to make better actions or changes about their learning tasks and produce more quality outcomes (7; 3; 6). For example, using grammar checker systems can assist students to receive automated feedback about their writing. This feedback could help to correct grammar and spelling mistakes without human involvement. Also, it can help to change particular words and select better choices that can enhance students’ writing tasks. Feedback for Students Feedback is an essential part of the teaching and learning process that can help students in their educational activities (30; 7; 15). The feedback provided to students can produce comments and useful information that show improvement and achievement. Those feedback can help to measure students’ works by comparing the current instructional situation with desired objectives in that specific instructional environment (14; 30; 5). This feedback is mostly used to provide information to learners and their abilities about some coursework activities and assignments’ results (7; 3). The feedback is generally occurred after students finish their educational tasks whether the feedback is delayed or immediate (4; 15). Delayed Feedback for Students Students can receive feedback that is generally provided by instructors about their learning tasks (26; 5). The normal interaction-feedback cycle happens when an instructor asks students to work on a task and waits for them to complete it. Then, the instructor evaluates the work the students have done and provide feedback about students’ outcomes (30; 5). Providing feedback to students can help address what issues, mistakes, or what parts need to improve (27; 18; 29). For example, when an instructor assigns some homework to students, they can work on that homework until they submit it. 
The instructor will review each student’s work. Then, the instructor can provide feedback to students about their homework results. Another way of providing delayed feedback to students is using peers’ feedback (26; 5). Classmates can help each other to work on learning activities and then provide feedback to each other (31; 32). Students can evaluate and provide feedback to peers about different learning tasks they work on as teams or even about individual activities (33; 34; 35). Students can follow predefined criteria provided by their instructor and use their knowledge and experience to evaluate their group members and provide feedback (5; 31). Peers intend to provide useful feedback to their teammates so then can make better outcomes, so the evaluation results are incorporated into the final grade (31). Delayed feedback to students can also be provided using computer-aided systems (1; 28). There are many automated feedback systems that produce feedback about learning tasks like sending students test away to be scored and waiting for the results (1; 28; 36). The feedback will be in a form of students’ grades that will be given to students sometimes in the future. In this article we are only considering automated feedback systems that provide immediate feedback. Immediate Feedback for Students Immediate feedback can be provided by instructors and automated feedback systems (16; 13). Instructors can provide immediate feedback about students’ activities by watching students working on their learning tasks and then providing feedback (3). Automated feedback systems can provide immediate help to students in more convenient time without the
  • 3. ISSN 2348-3156 (Print) International Journal of Social Science and Humanities Research ISSN 2348-3164 (online) Vol. 10, Issue 1, pp: (274-282), Month: January - March 2022, Available at: www.researchpublish.com Page | 276 Research Publish Journals need to wait for instructors to watch them working on that learning tasks (6; 36; 37). These automated systems can simply provide feedback to a specific task like assignment confirmation after students’ submission or what grade they make (26; 13). Also, those automated feedback systems can provide complex information with more details including real-time feedback that can provide auto corrector and steps to improve their works (7; 14). Automated feedback can improve students’ performance by motivating them to be engaged in the learning process (3; 4; 38). Students can work on different learning activities and receive timely feedback after each step (7; 16; 39). Those steps can assist students to improve their learning and make better results. There are many automated systems and online educational platforms that can help students to receive feedback immediately (39; 30). Intelligent Tutoring Systems (ITS) and Massive Open Online Courses (MOOCs) are commonly used for supporting students and to automatically evaluate their assignments and provide feedback to their activities (40; 41; 42; 13). Grade-Mark is another online marking modules that can review the students’ work to show any plagiarism detection. The quality and accuracy of those automated feedback are high quality and have almost the same positive results as human feedback (26; 16; 14). Feedback for Instructors Feedback provided to instructors is usually described as information about students such as class averages or drop fail rates (43; 44). Instructors mostly receive objective feedback about students’ performance and class learning activities (43; 45). Feedback can also provide information to instructors that can make them aware about learning process like what issues related to class activities and if there are any missing pieces in class materials (23; 3; 18). This feedback can help instructors to take the appropriate proactive actions about specific situations such as to provide more help to students who are not doing well (44; 30). In general, feedback delivered to instructors can be a type of subjective evaluations to provide information that may help instructors in the future (46; 43). The feedback can be a summary report after the class concluded that show students’ performance, success rate, grades, etc. Students can use end of semester evaluations to provide feedback and evaluate instructors (47; 46). The outcomes of those evaluations can be used by experts, administrators, or experienced instructors to assess and provide feedback to the class instructor (47; 30). Those feedback intend to produce information to help instructors to make any changes they need regarding the learning process. Delayed Feedback for Instructors Delayed feedback can be produced to instructors by peers or colleagues, administrators, students, and computer systems (7; 29; 48). Peers and colleagues can use observation methods to assess instructors and then provide feedback using their experiences, knowledge, and skills (29; 32; 43). They can also use video recordings and review learning activities to provide late feedback to instructors (7; 31; 49). Students can evaluate and provide feedback to instructors using evaluation forms at the end of the instructional units (46; 24). 
Delayed Feedback for Instructors

Delayed feedback can be produced for instructors by peers or colleagues, administrators, students, and computer systems (7; 29; 48). Peers and colleagues can use observation methods to assess instructors and then provide feedback based on their experience, knowledge, and skills (29; 32; 43). They can also review video recordings of learning activities to provide late feedback to instructors (7; 31; 49). Students can evaluate instructors and provide feedback using evaluation forms at the end of instructional units (46; 24). Experts and administrators can then use those evaluations and student feedback to assess instructors and provide late feedback (14; 33). Instructors can also use computer systems that provide information about the learning process, such as students' grades, weekly summaries, and failure rates (29; 7). Evaluation forms administered at the end of each semester are an important way to understand the improvements that are required (38; 50; 2). Instructors typically receive late feedback from students through these end-of-semester evaluations (14; 46; 15); it is the most common, but often ineffective, mechanism for providing feedback to instructors (51; 52). The late feedback resulting from this method can help instructors compare the actual learning outcomes with the designed objectives (26; 50). The feedback is usually a written, subjective report that shows the areas in which instructors are lacking or need improvement. Nevertheless, once the course materials have been evaluated, this feedback can inform and support instructors and help them improve students' learning activities and outcomes (14; 15; 7). Automated feedback systems can also provide delayed feedback to instructors (26; 7; 29). Such computer systems can show students' performance, timelines, class summaries, and so on (21; 20; 1). For example, instructors can use computer systems to check how many students submitted their assignment on time, how many submitted late, and how many did not submit at all. Instructors can benefit from these automated systems by saving the time needed to check students' work themselves. For instance, instructors can use Turnitin to check whether students have any plagiarism or academic misconduct issues. Turnitin is a common automated feedback system that checks whether students' work is original and highlights potential plagiarism (26; 14). The feedback coming out of Turnitin is a type of late report that instructors can use to evaluate students and provide the needed feedback. A toy sketch of this kind of text-overlap check is given below.
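Turnitin's matching technology is proprietary, so purely to illustrate the general idea of automated originality feedback, the following toy Python sketch flags overlap between a submission and a single reference text using word trigram (Jaccard) similarity; the texts and the threshold are invented for this example and this is not how Turnitin actually works.

```python
# Toy originality check: word 3-gram overlap between a submission and one source.
# Illustration of the concept only; not a real plagiarism-detection algorithm.

def ngrams(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_report(submission: str, source: str, threshold: float = 0.3) -> str:
    """Return delayed, instructor-facing feedback about textual overlap."""
    a, b = ngrams(submission), ngrams(source)
    if not a or not b:
        return "Not enough text to compare."
    jaccard = len(a & b) / len(a | b)
    flag = "REVIEW SUGGESTED" if jaccard >= threshold else "no concern"
    return f"n-gram overlap = {jaccard:.2f} ({flag})"

print(overlap_report("feedback is information provided about a performance",
                     "feedback is information provided to someone about a performance"))
```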
Immediate Feedback for Instructors

Few studies have investigated providing immediate feedback to instructors (14; 15; 1). Immediate feedback is produced automatically right after a learning task is completed (6; 1; 30). Most automated and online feedback systems provide feedback about students' learning activities (13; 42). Instructors may benefit from these feedback outcomes because they give a clear picture of students' performance, allowing instructors to track students' activities and mistakes directly and then provide the help needed to address those issues. For example, when using online learning systems, instructors can receive immediate feedback from students about whether or not they understand the material, and can then provide suitable help.

III. DISCUSSION

Providing human and automated feedback can offer advantages to students and instructors in post-secondary education (16; 14; 26). Feedback can be delayed or immediate, depending on the recipients' needs, and includes an action, a type of delivery, and components of feedback. Timely feedback can provide actionable information to higher education students and instructors. However, most existing methods of providing human and automated feedback aim to serve students in one way or another. The outputs of computerized learning systems such as auto-graders and grammar checkers can facilitate learning tasks and provide useful information to students (26; 20; 23). Research exploring automated feedback systems for instructors' benefit has not received the same amount of attention. This may be a result of the complicated and unclear ways of evaluating instructors' performance, and of the possibility that automated feedback for instructors could be used by institutions for evaluation rather than as private information for the individual. Students can be evaluated relatively easily by comparing their learning outcomes against predefined standards or rules, which makes it straightforward to provide the required feedback, whether delayed or immediate. In the instructor's case, however, it is not easy to set standards and rules for instructors to follow in their instructional activities, which makes it harder to design an automated feedback system that provides feedback for instructors' benefit. End-of-semester evaluations have been used for decades to evaluate and provide feedback about class instructors and course materials. This method has shown clear advantages in higher education, especially in providing late feedback to instructors, but it requires time, knowledge, and skill from the evaluators (43; 7; 24). There are also several issues with end-of-semester evaluations, such as students' honesty and how effectively the evaluations measure the quality of courses. Such evaluations are beneficial only if students are honest when completing them, and student honesty cannot easily be measured (53; 51). Moreover, end-of-semester evaluations are a summative and subjective measurement rather than a formative, ongoing, and fair evaluation (52).
In fact, providing feedback to instructors is a complex process (51; 53; 52). In higher education, instructors are normally measured by their scholarly output because it is quantifiable, but there is no complete way to measure an instructor's success (51). Researchers have argued that providing immediate feedback to instructors is a new and interesting area of research (47; 46; 42), and that providing feedback that supports instructors' needs is a good way to improve course content and instructional approaches, which in turn enhances the learning process (28; 54; 39). Other studies have claimed that feedback can help improve faculty performance and have shown that feedback analysis is important in helping educators make better improvements in their classes (27; 19). Designing automated feedback systems to serve instructors can be a challenging and complex task because of the different types of feedback and objectives involved (1; 29). Feedback is mostly used to identify mistakes, which helps students recognize their errors, but instructors cannot easily adapt this kind of feedback to their own advantage (20), because feedback provided to students can be evaluated and examined more easily through their learning outcomes (55). Evaluating feedback for instructors is a challenging task. Studies have shown that instructors' performance can be evaluated by examiners, peer review, or self-assessment (55), yet few studies have investigated how instructors' performance could actually be assessed. The reported advantages of automated feedback for instructors have mostly been drawn from a limited number of studies focusing only on online course platforms (5; 7; 51).

IV. THE POTENTIALS OF AUTOMATED FEEDBACK SYSTEMS FOR INSTRUCTORS

Providing instructors with timely feedback allows them to make changes to their practices. Such immediate feedback can also facilitate self-improvement and instructional change. Automatically produced feedback could show instructors information about specific learning activities. For example, when instructors receive many emails from students, an automated feedback system could detect students' common issues from the questions they ask and then provide the class instructor with a summary of which students need help and what their concerns are. A minimal sketch of this idea is shown below.
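One way such a summary could be produced is sketched below using TF-IDF features and k-means clustering from scikit-learn; the sample questions, the number of clusters, and the choice of library are assumptions made for illustration rather than a prescribed design.

```python
# Hypothetical sketch: cluster student questions to surface common issues for the instructor.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

questions = [
    "When is assignment 2 due?",
    "Is the deadline for assignment 2 this Friday?",
    "I don't understand recursion in lecture 5",
    "Can you explain the recursion example again?",
    "Where do I submit assignment 2?",
]

# Represent each question as a TF-IDF vector and group similar questions together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(questions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Summarize each cluster for the instructor: its size plus a representative question.
clusters = {}
for question, label in zip(questions, labels):
    clusters.setdefault(label, []).append(question)
for label, items in clusters.items():
    print(f"Issue {label}: {len(items)} students, e.g. '{items[0]}'")
```

The instructor would see, without reading every email, that one group of questions concerns an assignment deadline and another concerns a lecture topic that may need revisiting.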
This feedback could help instructors become aware of what they need to address and where to give more attention, without any human involvement. It would also arrive in a timely manner, so there would be no need to wait for other people to provide it. Developing automated feedback systems that provide immediate feedback to instructors could be done using data mining applied to instructional outputs (38; 56; 45). A great deal of data is available to help instructors understand how their courses are going, including course planning documents, grades, emails, discussion forums, assignment grading notes, quiz scores, and lecture attendance. Data mining can use these information resources to provide immediate feedback to instructors (28; 47; 54). Data mining tools and techniques can provide and predict information such as detecting students who need help, preventing students from dropping out, and predicting learning performance (57; 58; 59). In addition, data mining techniques can help discover useful information about formative and summative evaluations and can assist people in evaluating themselves (59; 21). Automated feedback systems can offer many benefits to instructors by producing feedback right after they finish an instructional task. The automated feedback system can take instructional outputs such as course preparation, course outlines, course websites, assignments, and lab descriptions as data inputs, process them, and then return feedback to instructors as system outputs, as shown in Figure 1. Automated feedback systems for instructors could provide immediate feedback about teaching methods, skills, missing requirements, summary reports, and self-assessment.

Figure 1: Automated Feedback System

This article shows the possibility of identifying a framework that can provide automated and immediate feedback to university instructors. The framework aims to help instructors self-assess parts of the learning process by applying data mining techniques to various post-secondary instructional outputs. These outputs, such as course materials, chat logs, student questions, emails, and assignment descriptions, can be used as the framework's inputs, processed automatically, and turned into useful outputs. The resulting feedback could raise instructors' awareness of which parts of the learning process are missing and could help them self-assess their performance immediately. The outcomes of the framework could be measured by comparing the automated feedback outputs with human feedback to ensure the framework's accuracy and quality; a minimal sketch of such a comparison follows.
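A simple way to quantify that agreement, assumed here purely for illustration, is Cohen's kappa computed between the labels an automated system assigns and the labels a human reviewer assigns to the same items; the labels and data below are invented.

```python
# Hypothetical comparison of automated vs. human feedback labels using Cohen's kappa.
from collections import Counter

def cohen_kappa(auto_labels, human_labels):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(auto_labels)
    observed = sum(a == h for a, h in zip(auto_labels, human_labels)) / n
    auto_freq, human_freq = Counter(auto_labels), Counter(human_labels)
    chance = sum((auto_freq[c] / n) * (human_freq[c] / n)
                 for c in set(auto_labels) | set(human_labels))
    return (observed - chance) / (1 - chance)

# Invented example: did each course component receive "ok" or "needs work"?
auto  = ["ok", "ok", "needs work", "ok", "needs work", "ok"]
human = ["ok", "ok", "needs work", "needs work", "needs work", "ok"]
print(round(cohen_kappa(auto, human), 2))  # agreement beyond chance, roughly 0.67
```

A kappa close to 1 would suggest the automated feedback largely matches what a human reviewer would have said, while a value near 0 would indicate agreement no better than chance.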
V. CONCLUSION

In conclusion, this article describes how feedback is produced for learners and educators in higher education. Feedback provided to students is mainly generated by instructors or by automated feedback systems, whereas feedback provided to instructors is generated by students, peers, and administrators. A review of the existing literature showed that limited work has been done on automated feedback for instructors. Instead, many studies describe how instructors receive human or delayed feedback through student evaluations, self-evaluations, peer observations, videotape recording and review, and consultation with staff supervisors.
Feedback provided to instructors is often delayed, which prevents them from acting on it in a timely fashion. In the era of big data, delayed or purely human feedback has clear disadvantages, so there is a need to explore faster and more effective ways of providing feedback. Automated feedback systems might be a solution, offering timely suggestions and useful information to instructors. University instructors stand to gain real advantages from exploring and adopting more automated feedback systems. We propose a framework that can provide feedback to instructors automatically. Automated feedback systems for instructors become possible once the required data, measurements, standards, and evaluation tools are identified and understood. These findings will help improve the overall learning process, thereby improving educational objectives and the quality of learning and its outcomes.

REFERENCES

[1] T. Bimba, N. Idris, A. Al-Hunaiyyan, R. B. Mahmud, and N. L. B. M. Shuib, "Adaptive feedback in computer-based learning environments: a review," Adaptive Behavior, vol. 25, pp. 217–234, Oct. 2017.
[2] J. Busler, C. Kirk, J. Keeley, and W. Buskist, "What Constitutes Poor Teaching? A Preliminary Inquiry Into the Misbehaviors of Not-So-Good Instructors," Teaching of Psychology, vol. 44, pp. 330–334, Oct. 2017.
[3] J. Hattie and H. Timperley, "The Power of Feedback," Review of Educational Research, vol. 77, pp. 81–112, Mar. 2007.
[4] R. Rodgers, "Attending to Student Voice: The Impact of Descriptive Feedback on Learning and Teaching," Curriculum Inquiry, vol. 36, pp. 209–237, Jan. 2006.
[5] L.-T. Chen and L. Liu, "Instructor's Self-Assessment of Content Design in Online Courses," International Journal of Technology in Teaching and Learning, 2018.
[6] G. Deeva, D. Bogdanova, E. Serral, M. Snoeck, and J. De Weerdt, "A review of automated feedback systems for learners: Classification framework, challenges and opportunities," Computers & Education, vol. 162, p. 104094, Mar. 2021.
[7] G. Cheng, S.-M. G. Chwo, J. Chen, D. Foung, V. Lam, and M. Tom, "Automatic Classification of Teacher Feedback and Its Potential Applications for EFL Writing," International Conference on Computers in Education, 2017.
[8] M. T. Orr, L. Hollingworth, and J. Cook, "Embedding Performance Assessments for Leaders into Preparation: A Comparison of Approaches, Candidates, and Assessment Evidence," Journal of School Leadership, vol. 28, pp. 294–314, May 2018.
[9] O. Ku, J.-K. Liang, S.-B. Chang, and M. Wu, "Sokrates Teaching Analytics System (STAS): An Automatic Teaching Behavior Analysis System for Facilitating Teacher Professional Development," International Conference on Computers in Education, 2018.
[10] N. Augar, C. J. Woodley, D. Whitefield, and M. Winchester, "Exploring academics' approaches to managing team assessment," International Journal of Educational Management, vol. 30, pp. 1150–1162, Aug. 2016.
[11] J. McCuaig and J. Baldwin, "Identifying Successful Learners from Interaction Behaviour," International Conference on Educational Data Mining, 2012.
[12] H. Keuning, J. Jeuring, and B. Heeren, "Towards a Systematic Review of Automated Feedback Generation for Programming Exercises – Extended Version," 2016.
[13] A. Agrawal, J. Venkatraman, and S. Leonard, "YouEDU: Addressing Confusion in MOOC Discussion Forums by Recommending Instructional Video Clips," International Conference on Educational Data Mining, 2015.
[14] S. Burrows and M. Shortis, "An evaluation of semi-automated, collaborative marking and feedback systems: Academic staff perspectives," Australasian Journal of Educational Technology, vol. 27, Nov. 2011.
[15] J. Ranalli, S. Link, and E. Chukharev-Hudilainen, "Automated writing evaluation for formative assessment of second language writing: investigating the accuracy and usefulness of feedback as part of argument-based validation," Educational Psychology, vol. 37, pp. 8–25, Jan. 2017.
[16] T. Price, R. Zhi, and T. Barnes, "Evaluation of a Data-driven Feedback Algorithm for Open-ended Programming," International Conference on Educational Data Mining, 2017.
[17] M. Kumar, M.-Y. Kan, B. C. Y. Tan, and K. Ragupathi, "Learning Instructor Intervention from MOOC Forums: Early Results and Issues," International Conference on Educational Data Mining, 2015.
[18] V. J. Shute, "Focus on Formative Feedback," Review of Educational Research, vol. 78, pp. 153–189, Mar. 2008.
[19] A. Shibani, S. Knight, and S. Buckingham Shum, "Educator perspectives on learning analytics in classroom practice," The Internet and Higher Education, vol. 46, p. 100730, July 2020.
[20] H. Keuning, J. Jeuring, and B. Heeren, "A Systematic Literature Review of Automated Feedback Generation for Programming Exercises," ACM Transactions on Computing Education, vol. 19, pp. 1–43, Jan. 2019.
[21] Y. Xiong and Y.-F. B. Wu, "Understanding Students' Perceptions of an Automated Feedback System: an Empirical Study Based on UTAUT," Americas Conference on Information Systems, 2019.
[22] I. Koprinska, J. Stretton, and K. Yacef, "Students at Risk: Detection and Remediation," International Conference on Educational Data Mining, 2015.
[23] L. Gusukuma, A. C. Bart, and D. Kafura, "Pedal: An Infrastructure for Automated Feedback Systems," in Proceedings of the 51st ACM Technical Symposium on Computer Science Education, Portland, OR, USA, pp. 1061–1067, ACM, Feb. 2020.
[24] A. Wang, S. Yu, L. Chen, X. Gao, and D. Wang, "Development of a Visualization based System for Analyzing Teachers' Emotional Experience in Classroom Observation Activities," International Conference on Computers in Education, 2018.
[25] K. D. Mattingly, M. C. Rice, and Z. L. Berge, "Learning analytics as a tool for closing the assessment loop in higher education," Knowledge Management & E-Learning: An International Journal, pp. 236–247, Sept. 2012.
[26] V. Clark-Gordon, N. D. Bowman, A. A. Hadden, and B. N. Frisby, "College instructors and the digital red pen: An exploratory study of factors influencing the adoption and non-adoption of digital written feedback technologies," Computers & Education, vol. 128, pp. 414–426, Jan. 2019.
[27] R. Lalit, K. Handa, and N. Sharma, "Fuzzy Based Automated Feedback Collection and Analysis System," Advances and Applications in Mathematical Sciences, vol. 18, no. 8, 2019.
[28] A. Lachner, C. Burkhart, and M. Nückles, "Formative computer-based feedback in the university classroom: Specific concept maps scaffold students' writing," Computers in Human Behavior, vol. 72, pp. 459–469, July 2017.
[29] N. Tanoue, L. N. Korovin, M. Carton, C. A. Galvani, and I. Ghaderi, "Faculty feedback versus residents' self-assessment of operative performance: Different but complementary," The American Journal of Surgery, vol. 215, pp. 288–292, Feb. 2018.
[30] J. Carvalho, "The design of an educationally beneficial immediate feedback system," MS thesis, 2017.
[31] M. J. Dingel, W. Wei, and A. Huq, "Cooperative learning and peer evaluation: The effect of free riders on team performance and the relationship between course performance and peer evaluation," Journal of the Scholarship of Teaching and Learning, vol. 13, no. 1, 2013.
[32] J. S. Kane and E. E. Lawler III, "Methods of Peer Assessment," 1978.
[33] P. Bedore and B. O'Sullivan, "Addressing Instructor Ambivalence about Peer Review and Self-Assessment," Writing Program Administration – Journal of the Council of Writing Program Administrators, 2011.
[34] B. M. Thompson, J. D. Gonzalo, and R. E. Levine, "The power of the written word: team assessment of behaviour," Medical Education, vol. 50, pp. 706–708, July 2016.
[35] N. Falchikov and J. Goldfinch, "Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks," 2000.
[36] S. Dikli and S. Bleyle, "Automated Essay Scoring feedback for second language writers: How does it compare to instructor feedback?," Assessing Writing, vol. 22, pp. 1–17, Oct. 2014.
[37] Y. Yang and L. F. Cornelious, "Preparing Instructors for Quality Online Instruction," Online Journal of Distance Learning Administration, 2005.
[38] G. Riahi, "E-learning Systems Based on Cloud Computing: A Review," Procedia Computer Science, vol. 62, pp. 352–359, 2015.
[39] Y. Akbulut and C. S. Cardak, "Adaptive educational hypermedia accommodating learning styles: A content analysis of publications from 2000 to 2011," Computers & Education, vol. 58, pp. 835–842, Feb. 2012.
[40] G. Kennedy, I. Ioannou, Y. Zhou, J. Bailey, and S. O'Leary, "Mining interactions in immersive learning environments for real-time student feedback," Australasian Journal of Educational Technology, vol. 29, May 2013.
[41] M. H. Qasem, R. Qaddoura, and B. Hammo, "Educational Data Mining (EDM): A Review," Conference on New Trends in Information Technology (NTIT), 2017.
[42] F. Mi and B. Faltings, "Adaptive Sequential Recommendation for Discussion Forums on MOOCs using Context Trees," International Conference on Educational Data Mining, 2017.
[43] Z. Papamitsiou and A. A. Economides, "Learning Analytics and Educational Data Mining in Practice," Educational Technology and Society, 2014.
[44] A. Dutt, M. A. Ismail, and T. Herawan, "A Systematic Review on Educational Data Mining," IEEE Access, vol. 5, pp. 15991–16005, 2017.
[45] J. Rusia, "Data Mining Technique on Sentiment Analysis and Computation of Views," International Journal of Engineering Applied Sciences and Technology, 2017.
[46] A. Sharma, "A Research Review on Comparative Analysis of Data Mining Tools, Techniques and Parameters," International Journal of Advanced Research in Computer Science, vol. 8, pp. 523–529, Aug. 2017.
[47] C. Romero and S. Ventura, "Educational Data Mining: A Review of the State of the Art," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 40, pp. 601–618, Nov. 2010.
[48] M. Dingel and W. Wei, "Influences on peer evaluation in a group project: an exploration of leadership, demographics and course performance," Assessment & Evaluation in Higher Education, vol. 39, pp. 729–742, Aug. 2014.
[49] A. Planas-Lladó, L. Feliu, G. Arbat, J. Pujol, J. J. Suñol, F. Castro, and C. Martí, "An analysis of teamwork based on self and peer evaluation in higher education," Assessment & Evaluation in Higher Education, vol. 46, pp. 191–207, Feb. 2021.
[50] A. M. Shahiri, W. Husain, and N. A. Rashid, "A Review on Predicting Student's Performance Using Data Mining Techniques," Procedia Computer Science, vol. 72, pp. 414–422, 2015.
[51] T. T. York, C. Gibson, and S. Rankin, "Defining and Measuring Academic Success," Practical Assessment, Research, and Evaluation, 2015.
[52] K. Young, J. Joines, T. Standish, and V. Gallagher, "Student evaluations of teaching: the impact of faculty procedures on response rates," Assessment & Evaluation in Higher Education, vol. 44, pp. 37–49, Jan. 2019.
[53] L. McClain, A. Gulbis, and D. Hays, "Honesty on student evaluations of teaching: effectiveness, purpose, and timing matter!," Assessment & Evaluation in Higher Education, vol. 43, pp. 369–385, Apr. 2018.
[54] M. K. Keleş, "An Overview: The Impact of Data Mining Applications on Various Sectors," Technical Journal, 2017.
[55] J. Kim and K. R. Lee, "Effects of an examiner's positive and negative feedback on self-assessment of skill performance, emotional response, and self-efficacy in Korea: a quasi-experimental study," BMC Medical Education, vol. 19, p. 142, Dec. 2019.
[56] O. A. Omitaomu, X. Li, and S. Zhou, "Optimization Based Data Mining Approach for Forecasting Real-Time Energy Demand," 2015.
[57] S. Hussain, N. Abdulaziz Dahan, F. M. Ba-Alwi, and N. Ribata, "Educational Data Mining and Analysis of Students' Academic Performance Using WEKA," Indonesian Journal of Electrical Engineering and Computer Science, vol. 9, p. 447, Feb. 2018.
[58] C. Condon and M. Clifford, "Measuring Principal Performance," Quality School Leadership, 2012.
[59] C. Romero and S. Ventura, "Educational data mining: A survey from 1995 to 2005," Expert Systems with Applications, vol. 33, pp. 135–146, July 2007.