A Review on Advancements in Autonomous Navigation Robots for Visually Impaired People

Purnima Hanumant Chabbi, Jaya Christiyan K.G, A.Ajina

Abstract

Recent advancements in autonomous navigation technologies have markedly facilitated the development of intelligent assistive systems aimed at augmenting the mobility and autonomy of individuals with visual impairments. This review paper critically examines contemporary innovations in autonomous navigation robots tailored for the visually impaired, with particular emphasis on the integration of computer vision, multimodal sensor fusion, machine learning, and real-time obstacle avoidance strategies. Notably, the deployment of state-of-the-art object detection algorithms, including You Only Look Once (YOLO) and Faster R-CNN, has substantially enhanced the precision and efficiency of environmental perception in dynamic settings. The study delineates a range of system architectures, spanning wearable technologies to robotic guide platforms, and evaluates their operational efficacy across diverse spatial contexts, encompassing both indoor and outdoor environments. Furthermore, the role of artificial intelligence in fostering situational awareness and autonomous decision-making is explored, with a view to optimising user interaction and navigational safety. The paper also engages with prevailing challenges, such as cost-effectiveness, adaptability to heterogeneous environments, and intuitive user interface design. In conclusion, this review provides a comprehensive synthesis of the current state of the art, identifies salient research gaps, and proposes strategic directions for future inquiry in the domain of autonomous assistive navigation for the visually impaired.
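To make the multimodal sensor fusion mentioned above concrete, the following is a minimal illustrative sketch, not a method from any system surveyed in the review: it fuses two noisy distance estimates (e.g. one from a camera-based depth pipeline, one from an ultrasonic ranger) by inverse-variance weighting, a common building block of such fusion schemes. The sensor names and variance values are assumptions chosen purely for illustration.

```python
def fuse_distances(d_vision: float, var_vision: float,
                   d_ultrasonic: float, var_ultrasonic: float) -> float:
    """Fuse two noisy distance estimates by inverse-variance weighting.

    Each estimate is weighted by the reciprocal of its variance, so the
    more reliable (lower-variance) sensor dominates the fused result.
    """
    w_vision = 1.0 / var_vision
    w_ultrasonic = 1.0 / var_ultrasonic
    return ((w_vision * d_vision + w_ultrasonic * d_ultrasonic)
            / (w_vision + w_ultrasonic))


# Hypothetical readings: the camera reports 2.0 m with high noise, while the
# ultrasonic sensor reports 1.8 m with low noise; the fused estimate is
# pulled toward the more trustworthy ultrasonic reading.
fused = fuse_distances(d_vision=2.0, var_vision=0.25,
                       d_ultrasonic=1.8, var_ultrasonic=0.04)
```

In a full navigation stack this per-measurement fusion would typically sit inside a Kalman or particle filter that also tracks motion over time; the static weighting here only shows the core idea of trusting each modality in proportion to its reliability.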