ChitChatGuide: How Can a Guidance System with Large Language Models Impact Shopping Mall Experiences for People with Visual Impairments?
Abstract
To improve the experience of exploring shopping malls for people with visual impairments (PVI), it is important that they be able to select destinations and obtain information based on their interests. We enabled this by integrating a large language model (LLM) into a navigation system, ChitChatGuide. The system allows users to find destinations and understand their surroundings through question answering grounded in map information and the user's current location. During navigation, users can also receive personalized descriptions of their surroundings, with the length adjusted to the transit time. We conducted a study in a shopping mall with 11 PVI, which revealed that the system enabled them to explore the facility with increased enjoyment. The results also suggest that ChitChatGuide can encourage PVI to visit stores they were previously unaware of. Based on these findings, we discuss criteria for integrating LLMs into navigation systems for PVI.