ENHANCING THE QUALITY OF ACADEMIC PAPER ABSTRACTS USING LARGE LANGUAGE MODELS: A CASE STUDY ON "DIGITAL ECONOMY" PAPERS IN CHINA NATIONAL KNOWLEDGE INFRASTRUCTURE (CNKI)
Volume 2, Issue 2, Pp 13-19, 2025
DOI: https://doi.org/10.61784/erhd3039
Author(s)
Lin Zhong1, ChaoMin Gao2*
Affiliation(s)
1Journal Editorial Department, Youjiang Medical University for Nationalities, Baise 533000, Guangxi, China.
2School of Business Administration, Baise University, Baise 533000, Guangxi, China.
Corresponding Author
ChaoMin Gao
ABSTRACT
To evaluate the writing quality of academic paper abstracts and explore the applicability of large language models (LLMs) in abstract optimization, this study selects 5,054 papers on the topic of "Digital Economy" from CNKI as samples. A quantitative scoring analysis assesses the abstracts' performance in four dimensions: research objective, research methodology, research results, and research conclusions. Additionally, abstracts with significant deficiencies are regenerated using LLMs and subsequently evaluated. The data reveal that 57.44% of the abstracts fail to effectively summarize the core content of the research, with particularly pronounced issues in the descriptions of research methodology and results. Abstracts generated by LLMs exhibit excellent structural integrity, logical coherence, and linguistic conciseness. The findings indicate that academic paper abstracts in China have significant deficiencies in expressing research methodology, results, and conclusions, necessitating improvements through technological means. Given their strong capability in abstract writing, LLMs should be utilized to enhance the quality of academic abstracts.
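The four-dimension assessment described above can be sketched as a simple rubric check. The following is a minimal illustrative sketch, not the authors' actual scoring instrument: the cue words, the binary per-dimension scoring, and the deficiency threshold are all assumptions introduced here for demonstration.

```python
# Hedged sketch of a four-dimension abstract rubric (objective, methodology,
# results, conclusions). The cue lists below are illustrative assumptions,
# not the study's real instrument.

CUES = {
    "objective": ("aim", "objective", "purpose", "to evaluate", "to explore"),
    "methodology": ("method", "survey", "regression", "model", "sample"),
    "results": ("result", "found", "show", "reveal", "%"),
    "conclusions": ("conclude", "conclusion", "suggest", "indicate", "should"),
}

def score_abstract(text: str) -> dict:
    """Score 1 for each dimension whose cue words appear, else 0."""
    lower = text.lower()
    return {dim: int(any(c in lower for c in cues)) for dim, cues in CUES.items()}

def is_deficient(text: str) -> bool:
    """Flag an abstract that fails to cover all four dimensions."""
    return sum(score_abstract(text).values()) < 4

example = ("This study aims to evaluate abstract quality using a sample of "
           "5,054 papers; results show 57.44% fail to summarize core content, "
           "and we conclude that LLMs should assist revision.")
print(is_deficient(example))  # False: all four dimensions are present
```

In the study itself, abstracts flagged as deficient on such a rubric were regenerated with an LLM and re-scored; a keyword heuristic like this one would only be a crude first pass before human or model-based evaluation.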
KEYWORDS
Academic paper abstract; Abstract quality; Large language model; Qwen; Scholarly communication
CITE THIS PAPER
Lin Zhong, ChaoMin Gao. Enhancing the quality of academic paper abstracts using large language models: a case study on "digital economy" papers in China National Knowledge Infrastructure (CNKI). Educational Research and Human Development. 2025, 2(2): 13-19. DOI: https://doi.org/10.61784/erhd3039.