Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring

Jiazheng Li1, Hainiu Xu1, Zhaoyue Sun1,4, Yuxiang Zhou1, David West2, Cesare Aloisi2, Yulan He1,3
1King’s College London, 2AQA, 3The Alan Turing Institute, 4University of Warwick

Abstract

Generating rationales that justify scoring decisions is a promising way to bring explainability to automated scoring systems. However, existing methods do not match the accuracy of classifier-based approaches, and the generated rationales often contain hallucinated information.

To address these issues, we propose a novel framework capable of generating more faithful rationales and, more importantly, matching the performance of classifier-based black-box scoring systems. We first mimic the human assessment process by querying Large Language Models (LLMs) to generate a thought tree. We then summarise the intermediate assessment decisions along each thought tree path to create synthetic rationale data and rationale preference data. Finally, we use the generated synthetic data to calibrate LLMs through a two-step training process: supervised fine-tuning followed by preference optimization.
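To make the data-construction step concrete, the sketch below shows one plausible way to turn summarised thought-tree paths into DPO-style preference triples: paths whose final score matches the reference score serve as "chosen" rationales, the rest as "rejected". The data structures, function names, and the example answer are illustrative assumptions, not the paper's released pipeline.

```python
# Minimal sketch: pairing thought-tree paths into preference data.
# All names here are illustrative; they are not from the authors' code.
from dataclasses import dataclass
from itertools import product


@dataclass
class ThoughtPath:
    rationale: str   # summarised intermediate assessment decisions along one path
    score: int       # final score implied by this reasoning path


def build_preference_pairs(question, answer, paths, reference_score):
    """Pair rationales that reach the reference score (chosen) with those
    that do not (rejected), producing DPO-style preference triples."""
    prompt = f"Question: {question}\nStudent answer: {answer}\nAssess and score:"
    chosen = [p for p in paths if p.score == reference_score]
    rejected = [p for p in paths if p.score != reference_score]
    return [
        {"prompt": prompt, "chosen": c.rationale, "rejected": r.rationale}
        for c, r in product(chosen, rejected)
    ]


# The resulting triples would feed the two-step recipe: supervised fine-tuning
# on the "chosen" rationales, then preference optimization (e.g. DPO) on pairs.
pairs = build_preference_pairs(
    question="Why does an ice cube melt faster in water than in air?",
    answer="Water transfers heat to the ice faster than air does.",
    paths=[
        ThoughtPath("Key point about heat transfer is present ... score 2", 2),
        ThoughtPath("Answer only restates the question ... score 0", 0),
    ],
    reference_score=2,
)
print(pairs[0]["chosen"])
```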

Extensive experimental results demonstrate that our framework achieves a 38% assessment performance improvement in the QWK score compared to prior work while producing higher-quality rationales, as recognised by human evaluators and LLMs. Our work sheds light on the effectiveness of performing preference optimization using synthetic preference data obtained from thought tree paths.
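QWK (Quadratic Weighted Kappa) measures chance-corrected agreement between predicted and reference scores, penalising large score disagreements more heavily. For reference, here is a minimal self-contained implementation; it is not the paper's evaluation code, and the integer score range bounds are assumptions.

```python
# Minimal sketch of Quadratic Weighted Kappa for integer rubric scores.
from collections import Counter


def quadratic_weighted_kappa(y_true, y_pred, min_score, max_score):
    """QWK = 1 - (sum w*O) / (sum w*E), with quadratic weights w_ij."""
    n = max_score - min_score + 1

    # Observed agreement matrix O[i][j]: counts of (true=i, pred=j).
    observed = [[0.0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        observed[t - min_score][p - min_score] += 1

    # Expected matrix E from the marginal score histograms.
    hist_true = Counter(t - min_score for t in y_true)
    hist_pred = Counter(p - min_score for p in y_pred)
    total = len(y_true)
    expected = [[hist_true[i] * hist_pred[j] / total for j in range(n)]
                for i in range(n)]

    # Quadratic disagreement weights w[i][j] = (i - j)^2 / (n - 1)^2.
    numerator = denominator = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2
            numerator += w * observed[i][j]
            denominator += w * expected[i][j]
    return 1.0 - numerator / denominator


# Perfect agreement yields 1.0; chance-level agreement drives QWK toward 0.
print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 0, 3))  # 1.0
```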


Other Related Works From Us

We have other works related to this paper:

EMNLP 2023: Distilling ChatGPT for Explainable Automated Student Answer Assessment, our first work on generating assessment rationales with ChatGPT.

AERA Chat: An Interactive Platform for Automated Explainable Student Answer Assessment, an interactive platform for using and annotating LLM-generated assessment rationales.

EMNLP 2024: Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence, in which we study the biased length reliance of direct preference optimization and propose a new, more efficient algorithm.

BibTeX

@misc{li2024thoughttree,
  title={Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring},
  author={Jiazheng Li and Hainiu Xu and Zhaoyue Sun and Yuxiang Zhou and David West and Cesare Aloisi and Yulan He},
  year={2024},
  eprint={2406.19949},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2406.19949},
}