CaseSumm is available on HuggingFace: https://huggingface.co/datasets/ChicagoHAI/CaseSumm
Under Review at NAACL 2025.
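For reference, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The split name and column names used below are assumptions for illustration, not confirmed by the dataset card; check the card for the actual schema.

```python
# Minimal sketch of loading CaseSumm with the Hugging Face `datasets` library.
# The split name ("train") and column names ("opinion", "syllabus") are
# assumptions -- consult the dataset card for the actual schema.
from datasets import load_dataset

dataset = load_dataset("ChicagoHAI/CaseSumm", split="train")

example = dataset[0]
print(example["opinion"][:500])   # hypothetical column: full SCOTUS opinion text
print(example["syllabus"][:500])  # hypothetical column: official syllabus summary
```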
Abstract: This paper introduces CaseSumm, a novel dataset for long-context summarization in the legal domain that addresses the need for longer and more complex datasets for summarization evaluation. We collect 25,611 U.S. Supreme Court (SCOTUS) opinions and their official summaries, known as syllabuses. Our dataset is the first to include summaries of SCOTUS decisions dating back to 1815, making it the largest open legal case summarization dataset.
We also present a comprehensive evaluation of LLM-generated summaries using both automatic metrics and expert human evaluation, revealing discrepancies between these assessment methods. Our evaluation shows that Mistral 7B, a smaller open-source model, outperforms larger models on automatic metrics and successfully generates syllabus-like summaries. In contrast, expert human annotators report that Mistral summaries contain hallucinations, and they consistently rank GPT-4 summaries as clearer and as exhibiting greater sensitivity and specificity. Our analysis identifies specific hallucinations in generated summaries, such as precedent citation errors and misrepresentations of case facts. These findings demonstrate the limitations of current automatic evaluation methods for legal summarization and underscore the critical role of human evaluation in assessing summary quality, particularly in complex, high-stakes domains.