Academic and research papers serve as valuable platforms for disseminating expertise and discoveries to diverse audiences. The growing volume of academic papers, with nearly 7 million new publications annually, presents a formidable challenge for students and researchers alike. Consequently, the development of research paper summarization tools has become essential for distilling key insights efficiently. This study examines the effectiveness of pre-trained models such as the text-to-text transfer transformer (T5), bidirectional encoder representations from transformers (BERT), bidirectional and auto-regressive transformer (BART), and pre-training with extracted gap-sentences for abstractive summarization (PEGASUS) on research papers, and introduces a novel hybrid model that merges extractive and abstractive techniques. Comparative analysis of the generated summaries, recall-oriented understudy for gisting evaluation (ROUGE) and bilingual evaluation understudy (BLEU) scores, and author evaluation are used to assess the quality and accuracy of the summaries. This work enhances the accessibility and efficiency of assimilating complex academic content, underscoring the importance of advanced summarization tools in broadening access to academic knowledge.
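To make the evaluation setup concrete, the following is a minimal sketch of abstractive summarization with a pre-trained model followed by ROUGE scoring, using the Hugging Face transformers pipeline and the rouge_score package. The model checkpoint (google/pegasus-arxiv) and the toy input and reference texts are illustrative assumptions, not the paper's actual hybrid pipeline or dataset; BLEU can be computed analogously with a standard implementation.

```python
# Illustrative sketch (not the paper's exact pipeline): summarize a passage with a
# pre-trained abstractive model and score it against a reference with ROUGE.
from transformers import pipeline
from rouge_score import rouge_scorer

# Assumed checkpoint: PEGASUS fine-tuned on arXiv papers.
summarizer = pipeline("summarization", model="google/pegasus-arxiv")

# Toy stand-ins for a paper body and its author-written abstract.
paper_text = (
    "Transformer-based language models have become the dominant approach to "
    "abstractive text summarization. We study their behaviour on long-form "
    "scientific documents and compare generated summaries against author abstracts."
)
reference_abstract = (
    "Transformer models are evaluated for abstractive summarization of scientific papers."
)

# Generate an abstractive summary.
generated = summarizer(
    paper_text, max_length=60, min_length=20, do_sample=False
)[0]["summary_text"]

# Score the generated summary against the reference using ROUGE-1 and ROUGE-L.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference_abstract, generated)

print(generated)
print(scores)
```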