Bridging Silence and Semantics: A Multimodal Review of Sign Language Recognition, Translation, and Adaptive Learning Systems

Antony Jacob, Angela Mary Anil, Alisha Ann Subash, Agnus Roy, Anu Rose Joy

Abstract

Bridging communication gaps for the deaf and mute community remains an open AI challenge, demanding systems that go beyond static sign recognition toward adaptive, emotion-aware interaction. While existing research has advanced isolated gesture recognition, few works address dynamic sentence translation, contextual understanding, and learner adaptability in real-world environments. This review analyzes recent developments in multimodal learning that integrate vision, text, and speech to enable seamless bidirectional communication and personalized education. It traces the evolution from CNN-based recognition to transformer-driven sign language understanding and avatar-based delivery, and synthesizes emerging multimodal approaches that blend recognition, translation, and emotion-aware adaptation into a unified assistive learning framework.
