Publication Date: 2023/12/04
Abstract: Navigating indoor environments can be challenging for visually impaired people, particularly for wayfinding tasks. Tools such as GPS make outdoor navigation feasible; indoors, however, low-precision location data and hard-to-detect obstacles pose a challenge. We propose an app that combines state-of-the-art promptable image segmentation from computer vision with augmented reality (AR) to assist the visually impaired in indoor navigation. Because indoor spaces contain a broader range of objects, automatically detecting obstacles in real time is challenging. The key idea in our approach is to use a faster variation of Meta's Segment Anything Model (FastSAM) to segment objects in the user's path. We use a generic indoor map of the environment to localize the user's position and overlay AR arrows that guide their navigation. FastSAM's zero-shot recognition capabilities allow us to automatically add nearby obstacles to the indoor map in real time, so that wayfinding can be updated to avoid them. Although FastSAM's speed makes our app deployable in real time, the accuracy it trades away relative to the original model makes mask generation less precise. Overall, our app detects larger obstacles, such as chairs and tables, at a high rate and generates optimal paths to a destination. Many existing indoor navigation systems depend heavily on a detailed indoor map or an extensive 3D model of the environment and do not account for dynamic obstacles. Our system minimizes the initial data required and can account for obstacles that cannot be observed from a map.
DOI: https://doi.org/10.5281/zenodo.10255025
PDF: https://ijirst.demo4.arinfotech.co/assets/upload/files/IJISRT23NOV1928.pdf
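
The paper's source code is not included here, but the pipeline the abstract describes (zero-shot FastSAM segmentation feeding an obstacle map that drives path planning and AR arrow guidance) can be sketched as follows. This is a minimal illustration assuming the ultralytics FastSAM wrapper; GRID_SHAPE, project_mask_to_grid, update_and_plan, and the pixel-to-grid projection are hypothetical names and simplifications, not the authors' implementation.

# Minimal sketch (not the authors' code): segment a camera frame with
# FastSAM, stamp detected obstacles onto a 2D occupancy grid, and
# re-plan a route with A*. GRID_SHAPE and project_mask_to_grid are
# illustrative assumptions; a real app would place obstacles using the
# AR framework's camera pose and plane estimates instead.
import heapq
import numpy as np
from ultralytics import FastSAM  # assumes the ultralytics FastSAM wrapper

GRID_SHAPE = (50, 50)                        # coarse indoor map grid
occupancy = np.zeros(GRID_SHAPE, dtype=bool)
model = FastSAM("FastSAM-s.pt")              # smaller variant for speed

def project_mask_to_grid(mask):
    # Stand-in projection: scale mask pixels down to grid cells.
    ys, xs = np.nonzero(mask)
    gy = ys * GRID_SHAPE[0] // mask.shape[0]
    gx = xs * GRID_SHAPE[1] // mask.shape[1]
    return set(zip(gy.tolist(), gx.tolist()))

def astar(grid, start, goal):
    # Plain 4-connected A* with a Manhattan-distance heuristic.
    def h(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    frontier = [(h(start, goal), 0, start)]
    came_from, cost = {start: None}, {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dy, cur[1] + dx)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and not grid[nxt] and g + 1 < cost.get(nxt, float("inf"))):
                cost[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(frontier, (g + 1 + h(nxt, goal), g + 1, nxt))
    return None                              # no obstacle-free route found

def update_and_plan(frame, user_cell, goal_cell):
    # Zero-shot "segment everything" pass on the current camera frame.
    results = model(frame, retina_masks=True, imgsz=640, conf=0.4, iou=0.9)
    if results[0].masks is not None:
        for mask in results[0].masks.data.cpu().numpy():
            if mask.sum() > 0.02 * mask.size:  # keep larger obstacles only
                for cell in project_mask_to_grid(mask):
                    occupancy[cell] = True
    return astar(occupancy, user_cell, goal_cell)  # waypoints for AR arrows

In the actual app, the returned waypoints would drive the AR arrow overlay, and the projection step would come from AR-based localization against the generic indoor map rather than raw pixel scaling.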