Hazar Yueksel, Ramon Bertran, et al.
MLSys 2020
We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of a model's answers to meaning-preserving variants of its input. Benchmark datasets are constructed by introducing naturally occurring, non-malicious perturbations or by generating semantically equivalent paraphrases of input questions or statements. We further propose a novel metric for assessing model robustness and demonstrate its benefits in this non-adversarial scenario through an empirical evaluation of several models on the constructed datasets.
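As an illustration of the evaluation setup described in the abstract (not the metric proposed in the paper), the sketch below scores robustness as the fraction of meaning-preserving variants for which a model's answer agrees with its answer on the original input; the `model_answer` callable, the toy model, and the example benchmark are hypothetical placeholders.

```python
# Hypothetical sketch: consistency-based robustness scoring under
# meaning-preserving perturbations. `model_answer` stands in for any
# question-answering model; this is not the paper's proposed metric.
from typing import Callable, Dict, List


def robustness_score(
    model_answer: Callable[[str], str],
    original: str,
    variants: List[str],
) -> float:
    """Fraction of variants whose answer matches the original input's answer."""
    reference = model_answer(original).strip().lower()
    matches = sum(
        1 for v in variants
        if model_answer(v).strip().lower() == reference
    )
    return matches / len(variants) if variants else 1.0


def evaluate(
    model_answer: Callable[[str], str],
    benchmark: Dict[str, List[str]],
) -> float:
    """Average per-question robustness over a benchmark mapping each
    original question to its non-malicious perturbations or paraphrases."""
    scores = [
        robustness_score(model_answer, question, variants)
        for question, variants in benchmark.items()
    ]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    # Toy model and toy benchmark, purely for illustration.
    canned = {
        "what is the capital of france": "Paris",
        "which city is the capital of france": "Paris",
        "capital of france??": "Lyon",  # simulated failure on a noisy variant
    }
    toy_model = lambda q: canned.get(q.strip().lower(), "unknown")
    benchmark = {
        "what is the capital of france": [
            "which city is the capital of france",
            "capital of france??",
        ],
    }
    print(f"mean robustness: {evaluate(toy_model, benchmark):.2f}")
```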
Megh Thakkar, Quentin Fournier, et al.
ACL 2024
Natalia Martinez Gil, Dhaval Patel, et al.
UAI 2024
Chulin Xie, Keli Huang, et al.
ICLR 2020