Multimodal Deep Learning Framework for Cross-Cultural Dance Fusion Recognition: Case Study on Indian Classical and Western Motion Patterns

K. R. Soujanya, Bhavana R. Maale

Abstract

This paper presents a multimodal deep learning framework for the automated recognition and classification of movement patterns in cross-cultural dance fusion, specifically between Indian classical and Western styles. The model integrates three distinct data modalities: skeletal motion trajectories, rhythmic audio features, and semantic gesture annotations. Through spatio-temporal analysis, the framework learns to disentangle and identify the characteristic motion signatures of each tradition. To ensure interpretability, the architecture incorporates attention mechanisms that visualize the model's focus on diagnostically significant movements and rhythmic cues. Evaluated on a curated dataset, the proposed method demonstrates robust performance in detecting choreographic fusion, quantified through a novel fusion index. This work contributes to dance informatics and digital heritage by providing a scalable analytical tool for understanding cultural-artistic synthesis in the performing arts.
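To make the abstract's description more concrete, the following is a minimal illustrative sketch, assuming PyTorch, of the kind of three-branch architecture described: separate encoders for skeletal motion, rhythmic audio, and semantic gesture annotations, an attention layer that weights the modality summaries, and a classifier whose output distribution feeds a simple fusion score. All module names (DanceFusionNet, fusion_index), feature dimensions, and the entropy-based stand-in for the paper's fusion index are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DanceFusionNet(nn.Module):
    def __init__(self, skel_dim=75, audio_dim=40, gesture_dim=32,
                 hidden=128, num_styles=2):
        super().__init__()
        # Skeletal branch: GRU over per-frame joint coordinates (spatio-temporal).
        self.skel_enc = nn.GRU(skel_dim, hidden, batch_first=True)
        # Audio branch: GRU over per-frame rhythmic features (e.g. MFCC-like vectors).
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)
        # Gesture-annotation branch: projection of a pooled semantic gesture embedding.
        self.gesture_enc = nn.Sequential(nn.Linear(gesture_dim, hidden), nn.ReLU())
        # Attention over the three modality summaries (supports interpretability).
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, num_styles)

    def forward(self, skel, audio, gesture):
        _, h_skel = self.skel_enc(skel)        # final hidden state: (1, B, hidden)
        _, h_audio = self.audio_enc(audio)     # final hidden state: (1, B, hidden)
        h_gest = self.gesture_enc(gesture)     # (B, hidden)
        # Stack the three modality summaries: (B, 3, hidden)
        mods = torch.stack([h_skel[-1], h_audio[-1], h_gest], dim=1)
        attn_w = F.softmax(self.attn(mods), dim=1)   # per-modality attention weights
        fused = (attn_w * mods).sum(dim=1)           # attention-weighted fusion
        logits = self.classifier(fused)
        return logits, attn_w.squeeze(-1)


def fusion_index(logits):
    """Toy fusion index: normalized entropy of the predicted style distribution.
    Near 0 for a clearly single-style segment, near 1 for an even blend."""
    p = F.softmax(logits, dim=-1)
    ent = -(p * p.clamp_min(1e-8).log()).sum(dim=-1)
    return ent / torch.log(torch.tensor(float(p.shape[-1])))


if __name__ == "__main__":
    model = DanceFusionNet()
    skel = torch.randn(4, 120, 75)     # 4 clips, 120 frames, 25 joints x 3 coords
    audio = torch.randn(4, 120, 40)    # matching rhythmic feature frames
    gesture = torch.randn(4, 32)       # pooled gesture-annotation embedding
    logits, weights = model(skel, audio, gesture)
    print("fusion index per clip:", fusion_index(logits))
    print("modality attention weights:", weights)

In a sketch like this, the per-modality attention weights play the interpretability role the abstract mentions, indicating whether skeletal, rhythmic, or gesture cues drove a given classification.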
