Semantic Reviewer Matching at Conference Scale with LLM-Generated Keywords and Knowledge-Graph Profiles
BAGHERI, FARID
2026
Abstract
The rapid increase in the number of submissions to scientific conferences and journals has made the assignment of qualified reviewers a critical and challenging task. Manual assignment and simple keyword-based methods do not scale well and often fail to capture the semantic diversity and interdisciplinarity of modern research. This thesis addresses the reviewer assignment problem by investigating content-based methods that improve how the topical relevance between papers and reviewers is estimated, while remaining interpretable and applicable at conference scale. The thesis makes three main contributions. First, it introduces and analyzes a large real-world dataset of conference submissions and reviewer profiles in the areas of Computer Science and the Semantic Web, enabling realistic and reproducible evaluation of reviewer matching methods. Second, it systematically studies different ways of representing papers and reviewers using textual information, structured knowledge, and automatically extracted keywords. In particular, it compares traditional text-based similarity approaches with knowledge-graph–enriched representations and keyword-based profiles generated with large language models, showing how these complementary signals improve the estimation of reviewer expertise. Third, the thesis proposes an automated framework in which large language models assist the reviewer assignment process by producing concise, human-readable keyword representations that support transparent and effective matching. Extensive experiments demonstrate that combining lexical overlap and semantic similarity leads to more robust reviewer recommendations under realistic assignment constraints, such as limited reviewer capacity and conflict avoidance. The results show that large language models and knowledge-based representations can significantly enhance reviewer matching while preserving interpretability and auditability. 
Overall, this work provides practical and scalable methods that support program chairs and editors in assigning reviewers more accurately and fairly in modern peer-review workflows.
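The combination of lexical overlap and semantic similarity under capacity and conflict constraints, as described in the abstract, can be illustrated with a minimal sketch. This is not the thesis's actual implementation: the data layout, the equal weighting `alpha=0.5`, the Jaccard/cosine stand-ins for the thesis's richer keyword and knowledge-graph signals, and the greedy one-reviewer-per-paper assignment are all illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def jaccard(a, b):
    """Lexical overlap between two keyword sets (illustrative signal)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(text_a, text_b):
    """Bag-of-words cosine as a simple stand-in for semantic similarity."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign(papers, reviewers, capacity=2, conflicts=None, alpha=0.5):
    """Greedy matching: score every non-conflicting paper-reviewer pair by a
    weighted mix of lexical and semantic similarity, then assign pairs in
    descending score order while respecting per-reviewer capacity."""
    conflicts = conflicts or set()
    scored = []
    for p, pdata in papers.items():
        for r, rdata in reviewers.items():
            if (p, r) in conflicts:  # conflict-of-interest avoidance
                continue
            s = (alpha * jaccard(pdata["keywords"], rdata["keywords"])
                 + (1 - alpha) * cosine(pdata["abstract"], rdata["profile"]))
            scored.append((s, p, r))
    load = {r: 0 for r in reviewers}
    assignment = {}
    for s, p, r in sorted(scored, reverse=True):
        if p not in assignment and load[r] < capacity:
            assignment[p] = r
            load[r] += 1
    return assignment
```

A real deployment would assign several reviewers per paper and could replace the greedy loop with an optimal assignment solver; the sketch only shows how the two similarity signals and the constraints interact.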
| File | Size | Format | Access |
|---|---|---|---|
| Ph.pdf | 2.33 MB | Adobe PDF | Open access (license: all rights reserved) |
Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.
https://hdl.handle.net/20.500.14242/362921
URN:NBN:IT:UNICA-362921