Non-visual user interface using speech recognition and synthesis
Abstract
Non-visual environments are becoming increasingly important in supporting human activities through man-machine systems. Here, a non-visual environment means a situation in which a screen and mouse are not available, for example because the user is occupied with another task involving eye-hand coordination, or is visually disabled. In this paper, we propose 'Speech Pointer,' a user interface for non-visual environments that uses speech recognition and synthesis to enable direct access, or pointing, to textual information. We prototyped the Speech Pointer for browsing the Web. We also prototyped another non-visual user interface on top of an existing visual application in order to find an efficient method for extending visual applications to non-visual use. The purpose of these activities is to broaden the scope of human activities in non-visual environments.