MQPF: A MULTI-DIMENSIONAL QUALITY-AWARE PATH FUSION FRAMEWORK FOR QUESTION ANSWERING

Volume 7, Issue 6, pp. 41-48, 2025

DOI: https://doi.org/10.61784/jcsee3086

Author(s)

XinYi Wang1, Bo Liu2*

Affiliation(s)

1National University of Defense Technology, Changsha 410073, Hunan, China.

2Academy of Military Sciences, Beijing 100091, China.

Corresponding Author

Bo Liu

ABSTRACT

In multi-hop question answering (MHQA) tasks, existing methods typically integrate multiple reasoning paths drawn from knowledge graphs (KGs) and chains of thought (CoTs). Early KG-enhanced methods focus primarily on retrieving relevant knowledge but fail to consider the multi-dimensional quality of reasoning paths. Subsequent works filter paths but treat all retained paths as equally important, without further differentiation. Although some recent works attempt to rank paths by quality, they provide only a relative order without quantifying the actual quality differences between paths. To address these limitations, we propose a Multi-dimensional Quality-aware Path Fusion (MQPF) framework. MQPF introduces a multi-dimensional evaluation mechanism that quantifies path quality along semantic, structural, and outcome-based dimensions. Based on the overall scores, MQPF first filters out low-quality paths to reduce noise and then assigns adaptive weights to the remaining paths according to their scores. This approach effectively removes unreliable information and improves the utilization of trustworthy information during reasoning. Experiments show that MQPF performs comparably to baselines on multiple datasets. Moreover, because MQPF is model-agnostic, it can be used as a plug-and-play component to enhance existing multi-path reasoning methods.
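To make the filter-then-weight fusion described above concrete, the following is a minimal Python sketch of that two-stage idea. It is an illustration only, not the paper's implementation: the per-dimension scores, the equal dimension weights, the filtering threshold tau, and the softmax temperature are all assumptions introduced here for clarity, since the abstract does not specify the actual scoring functions or hyperparameters.

from dataclasses import dataclass
import math

@dataclass
class ReasoningPath:
    text: str          # linearized path, e.g. "entity -> relation -> entity"
    semantic: float    # semantic relevance score in [0, 1] (assumed form)
    structural: float  # structural soundness score in [0, 1] (assumed form)
    outcome: float     # outcome-based score in [0, 1] (assumed form)

def overall_score(path, weights=(1/3, 1/3, 1/3)):
    # Hypothetical aggregation: a weighted sum of the three dimension scores.
    return (weights[0] * path.semantic
            + weights[1] * path.structural
            + weights[2] * path.outcome)

def fuse_paths(paths, tau=0.5, temperature=0.1):
    # Stage 1 (noise filtering): drop paths whose overall score falls below tau.
    scored = [(p, overall_score(p)) for p in paths]
    kept = [(p, s) for p, s in scored if s >= tau]
    if not kept:
        return []
    # Stage 2 (adaptive weighting): softmax the surviving scores so that
    # higher-quality paths contribute more during answer generation.
    exps = [math.exp(s / temperature) for _, s in kept]
    total = sum(exps)
    return [(p, e / total) for (p, _), e in zip(kept, exps)]

The sketch mirrors only the structure the abstract describes: low-quality paths are removed outright rather than merely down-weighted, and the remaining paths receive weights proportional to their quantified quality instead of the uniform treatment used by earlier filtering methods.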

KEYWORDS

Question answering; Large language model; Knowledge graph

CITE THIS PAPER

XinYi Wang, Bo Liu. MQPF: a multi-dimensional quality-aware path fusion framework for question answering. Journal of Computer Science and Electrical Engineering. 2025, 7(6): 41-48. DOI: https://doi.org/10.61784/jcsee3086.

