Independent navigation is challenging for blind people, particularly in unfamiliar environments. Navigation assistive technologies try to provide additional support by guiding users or increasing their knowledge of the surroundings, but accurate solutions are still not widely available. Based on this limitation, and on the fact that spatial knowledge can also be acquired indirectly (prior to navigation), we developed an interactive virtual navigation app in which users can learn unfamiliar routes before physically visiting the environment. Our main research goals are to understand the acquisition of route knowledge through smartphone-based virtual navigation: how it evolves over time, its ability to support independent, unassisted real-world navigation of short routes, and its ability to improve user performance when using an accurate in-situ navigation tool (NavCog). With these goals in mind, we conducted a user study in which 14 blind participants virtually learned routes at home for three consecutive days and then physically navigated them, both unassisted and with NavCog. In virtual navigation, we analyzed the evolution of route knowledge and found that participants were able to quickly learn shorter routes and to gradually increase their knowledge of both short and long routes. In the real world, we found that users were able to take advantage of this knowledge, acquired entirely through virtual navigation, to complete unassisted navigation tasks. When using NavCog, users tended to rely on the navigation system rather than on their prior knowledge, and therefore virtual navigation did not significantly improve their performance.

This thesis focuses on the problem of navigation for people with visual impairments. There are many research directions in this field, from sensory substitution and obstacle-detection aids to navigation systems based on various input and output modalities. However, currently available navigation systems do not support the navigation and orientation skills of people with visual impairments in the sense of fostering the creation of proper route knowledge. This thesis aims to explore the role of verbal descriptions in navigation from the perspective of human-computer interaction. In a series of experiments, we provide insights into verbal description sharing habits and tele-assistance navigation strategies. Based on those insights, we design three navigation system prototypes: a collaborative navigation system, a landmark-based navigation system, and a conversational navigation agent. The thesis contributes both on a theoretical level, providing in-depth observations of the navigation process, and on a pragmatic level, providing prototypes and guidelines for designers of future navigation systems for people with visual impairments.

Tactile models, such as floor plans of a familiar or unfamiliar environment, can help people with visual impairments grasp and interpret spatial information. Such plans are usually fabricated physically in a time-consuming process and are not interactive. This paper suggests presenting tactile floor plans using surface haptic feedback on an electrostatic display to overcome these limitations. Besides audio-haptically exploring tactile floor plans, our prototype also allows for voice interaction and demonstrates the control of smart-home devices in this context. The evaluation was conducted in two stages with eight participants with visual impairments: first, we investigated how individual rooms can be identified and assigned using electrostatic tactile feedback on a common dot-matrix display; second, we evaluated the generation of a mental map when exploring an interactive, detailed floor plan with several rooms. Our results show that electrostatic haptic feedback enables people with visual impairments to recognize and understand graphical elements such as rooms and a floor plan.