Publication
MobileHCI 2024
Conference paper
ChitChatGuide: How Can A Guidance System with Large Language Models Impact Shopping Mall Experiences for People with Visual Impairments?
Abstract
To improve the exploration experience in shopping malls for people with visual impairments (PVI), it is important for them to select destinations and obtain information based on their interests. We enabled this by integrating a large language model (LLM) into a navigation system. The system allows users to locate and understand their surroundings through question answering grounded in map information and the user's location. Users can also receive personalized descriptions of their surroundings during navigation, with the length adjusted to the transit time. We conducted a study in a shopping mall with 11 PVI, which revealed that the system enabled them to explore the facility with increased enjoyment. The study showed that ChitChatGuide could potentially encourage PVI to visit stores they did not know about. Based on the results, we discuss criteria for integrating LLMs into navigation systems for PVI.
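The abstract describes two mechanisms: LLM question answering grounded in map data and the user's location, and surrounding descriptions whose length is fit to the remaining transit time. The sketch below is not the authors' implementation; it is a minimal illustration of those two ideas under stated assumptions. The POI structure, walking-derived time budget, words-per-second speech rate, and the call_llm placeholder are all hypothetical.

from dataclasses import dataclass

@dataclass
class POI:
    name: str        # store name from the mall map
    category: str    # e.g., "cafe", "bookstore"
    distance_m: float  # distance from the user's current position

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; the paper does not specify the API."""
    raise NotImplementedError

def answer_question(question: str, nearby: list[POI]) -> str:
    # Ground the model in the map: list nearby stores and distances so the
    # answer refers to the user's actual surroundings.
    context = "\n".join(
        f"- {p.name} ({p.category}), {p.distance_m:.0f} m away" for p in nearby
    )
    prompt = (
        "You are a shopping-mall guide assisting a visually impaired user.\n"
        f"Nearby places:\n{context}\n\n"
        f"Question: {question}\nAnswer briefly and concretely."
    )
    return call_llm(prompt)

def describe_surroundings(nearby: list[POI], remaining_transit_s: float,
                          words_per_s: float = 2.5) -> str:
    # Budget the description so the spoken text finishes before arrival
    # (assumed speech rate of ~2.5 words per second).
    word_budget = int(remaining_transit_s * words_per_s)
    context = ", ".join(p.name for p in nearby)
    prompt = (
        f"In at most {word_budget} words, describe for a visually impaired "
        f"visitor the places they are passing: {context}."
    )
    return call_llm(prompt)

In this sketch the length adjustment is done by a word budget in the prompt, which is one plausible reading of "adjusted to the transit time"; the paper's actual mechanism may differ.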