RoMe: A Robust Metric for Evaluating Natural Language Generation
PyTorch code for the ACL 2022 paper: RoMe: A Robust Metric for Evaluating Natural Language Generation [PDF].
Installation (Anaconda)
```bash
conda create -n RoMe -y python=3.8 && source activate RoMe
pip install -r requirements.txt
chmod +x setup.sh
./setup.sh
```
Run

```bash
python rome.py
```
N.B.: RoMe's components are highly parameter-sensitive. We recommend that users experiment with different parameters when adapting the code to a new domain or dataset.
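Since rome.py is the only documented entry point, the sketch below only illustrates, at a high level, the kind of three-feature recipe the paper's abstract describes (semantic similarity, tree edit distance, and grammatical acceptability). Every function name, every placeholder implementation, and the fixed weights are hypothetical stand-ins invented for this sketch; they are not the repository's API, and the actual metric learns the combination with a self-supervised neural network rather than fixed weights.

```python
# Illustrative sketch only -- NOT the repository's API. All names and
# weights below are hypothetical; RoMe itself learns the feature
# combination with a self-supervised network.

from difflib import SequenceMatcher


def semantic_similarity(hypothesis: str, reference: str) -> float:
    # Placeholder: stands in for embedding-based semantic similarity;
    # a surface-overlap ratio keeps the sketch self-contained.
    return SequenceMatcher(None, hypothesis, reference).ratio()


def tree_edit_distance_score(hypothesis: str, reference: str) -> float:
    # Placeholder: stands in for a normalized tree edit distance over
    # syntactic parses; token-level Levenshtein distance is used instead.
    hyp, ref = hypothesis.split(), reference.split()
    m, n = len(hyp), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 or j == 0:
                dp[i][j] = i + j
            elif hyp[i - 1] == ref[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    # Convert a distance into a similarity in [0, 1].
    return 1.0 - dp[m][n] / max(m, n, 1)


def grammatical_acceptability(hypothesis: str) -> float:
    # Placeholder: stands in for a learned acceptability classifier.
    return 1.0 if hypothesis and hypothesis[0].isupper() else 0.5


def rome_like_score(hypothesis: str, reference: str) -> float:
    # Fixed weights for illustration only; the real metric learns this
    # combination from data.
    features = (
        semantic_similarity(hypothesis, reference),
        tree_edit_distance_score(hypothesis, reference),
        grammatical_acceptability(hypothesis),
    )
    weights = (0.5, 0.3, 0.2)
    return sum(w * f for w, f in zip(weights, features))


if __name__ == "__main__":
    print(rome_like_score("The cat sat on the mat.", "A cat is sitting on the mat."))
```

The sketch prints a single quality score in [0, 1] for the example pair; swapping in real parsers, sentence embeddings, and an acceptability model would move it closer to the metric the paper describes.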
Citation
Please cite the following paper if you use the code:
```bibtex
@inproceedings{rony-etal-2022-rome,
    title = "{R}o{M}e: A Robust Metric for Evaluating Natural Language Generation",
    author = "Rony, Md Rashad Al Hasan and
      Kovriguina, Liubov and
      Chaudhuri, Debanjan and
      Usbeck, Ricardo and
      Lehmann, Jens",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.387",
    pages = "5645--5657",
    abstract = "Evaluating Natural Language Generation (NLG) systems is a challenging task. Firstly, the metric should ensure that the generated hypothesis reflects the reference{'}s semantics. Secondly, it should consider the grammatical quality of the generated sentence. Thirdly, it should be robust enough to handle various surface forms of the generated sentence. Thus, an effective evaluation metric has to be multifaceted. In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe. Empirical results suggest that RoMe has a stronger correlation to human judgment over state-of-the-art metrics in evaluating system-generated sentences across several NLG tasks.",
}
```
License

MIT
Contact

For more information, please contact the corresponding author, Md Rashad Al Hasan Rony (email).
