Publication
ICML 2024
Workshop paper
Humans Linguistically Align to their Conversational Partners, and Language Models Should Too
Abstract
Humankind has honed its language system over thousands of years to engage in statistical learning and form predictions about upcoming input, often based on the properties of, or prior conversational experience with, a specific conversational partner. Large language models, however, do not adapt their language in a user-specific manner. We argue that AI and ML researchers and developers should not ignore this critical component of human language processing but should instead incorporate it into LLM development, and that doing so will improve LLM conversational performance as well as users’ perceptions of models on dimensions such as accuracy and task success.