The 4th Workshop on Human Evaluation of NLP Systems (HumEval’24)

News

  • 20 May: Our accepted papers are online!
  • 05 May: The workshop programme is online!
  • 8 Jan: Call for Papers published
  • 1 Dec: Mark Díaz, Google Research, and Sheila Castilho, ADAPT/DCU, confirmed as keynote speakers!
  • 15 Nov: The fourth edition of HumEval will be held at LREC-COLING 2024!

Invited Speakers

Mark J Díaz, Google Research
Sheila Castilho, ADAPT/DCU

Workshop Topic and Content

The HumEval workshops (previously held at EACL 2021, ACL 2022 and RANLP 2023) aim to create a forum for current human evaluation research and its future directions: a space for researchers working with human evaluations to exchange ideas and begin to address the issues facing human evaluation in NLP, including experimental design, meta-evaluation and reproducibility. We invite papers on topics including, but not limited to, the following, as addressed in any subfield of NLP:

  • Experimental design and methods for human evaluations
  • Reproducibility of human evaluations
  • Inter-evaluator and intra-evaluator agreement
  • Ethical considerations in human evaluation of computational systems
  • Quality assurance for human evaluation
  • Crowdsourcing for human evaluation
  • Issues in meta-evaluation of automatic metrics by correlation with human evaluations
  • Alternative forms of meta-evaluation and validation of human evaluations
  • Comparability of different human evaluations
  • Methods for assessing the quality and the reliability of human evaluations
  • Role of human evaluation in the context of Responsible and Accountable AI

We welcome work from any subfield of NLP (and ML/AI more generally), with a particular focus on evaluation of systems that produce language as output.