Enhancing Federated Learning Convergence in Mobile Edge Computing Through Dynamic Community Adaptation


Manikonda Srinivasa Sesha Sai, Sarala Patchala, Guru Kesava Dasu Gopisetty, V V Jaya Rama Krishnaiah, Kondapalli Tejaswi, Jidugu Mounika

Abstract

Mobile Edge Computing (MEC) moves computation to the network edge, reducing latency and improving efficiency. Traditional machine learning pipelines centralize data on a server, which raises privacy concerns and incurs high data-transfer costs. Federated Learning (FL) addresses this by training models locally on each device, keeping data private and reducing transmission overhead. FL in MEC faces its own challenges, however: devices may join or leave the network at any time, which slows convergence, and devices differ widely in available resources, creating an imbalance between powerful and weak participants. This paper proposes a dynamic training strategy that adapts to such changes in the network. Using multi-agent reinforcement learning, each device adjusts its local training according to current network conditions, which speeds up learning and balances resource use across devices. The strategy also incorporates meta-learning so that newly joining devices do not have to train from scratch; they reuse past knowledge to learn quickly, which shortens training time and saves energy. Together these mechanisms ensure faster convergence of the global FL model. The proposed approach improves accuracy while reducing training time and resource consumption, and it outperforms other standard methods. The work has real-world applications in smart cities, healthcare, IoT devices, security systems, autonomous vehicles and industrial automation. Federated learning in MEC is important for distributed AI because it lets devices collaborate without exposing private data, and the dynamic community structure keeps the system flexible, scalable and efficient in realistic scenarios. The combination of reinforcement learning and meta-learning provides a strong solution that reduces the burden on individual devices while improving overall performance. The paper thus contributes a practical and efficient solution for MEC and FL research that balances accuracy, resource use and training speed, and can improve AI systems at the network edge. Future work can explore additional optimizations; enhancing communication efficiency and exploring new learning techniques can further improve FL in MEC environments.
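
To make the described workflow concrete, the following minimal sketch illustrates a federated-averaging round in which each device scales its local work to its resource level and a late-joining device is warm-started from the shared global model instead of a random initialization. It is only an illustration of the general pattern, not the authors' implementation: the names (local_update, choose_local_epochs, resource) are hypothetical, and the simple resource-proportional rule stands in for the multi-agent reinforcement-learning policy, while the warm start stands in for the meta-learned initialization described in the abstract.

```python
# Illustrative sketch only (assumed names and a toy linear model), showing
# FedAvg-style local training, resource-aware local-epoch selection, and a
# warm start for a newly joining device.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10  # size of the toy linear model


def local_update(global_weights, data, epochs, lr=0.05):
    """Run a few epochs of local gradient descent on one device's private data."""
    X, y = data
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w


def choose_local_epochs(resource):
    """Stand-in for the adaptive policy: stronger devices do more local work.

    The paper uses multi-agent reinforcement learning for this decision; a
    simple resource-proportional rule keeps the sketch runnable.
    """
    return max(1, int(5 * resource))


# Each device holds private data and a resource level in (0, 1].
true_w = rng.normal(size=DIM)
devices = []
for _ in range(5):
    X = rng.normal(size=(50, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append({"data": (X, y), "resource": rng.uniform(0.2, 1.0)})

global_w = np.zeros(DIM)  # shared model; raw data never leaves a device
for _ in range(20):
    local_models = [
        local_update(global_w, dev["data"], choose_local_epochs(dev["resource"]))
        for dev in devices
    ]
    global_w = np.mean(local_models, axis=0)  # FedAvg-style aggregation

# A device joining late starts from the current global model (a stand-in for
# the meta-learned initialization), so it needs only a few local epochs.
X_new = rng.normal(size=(50, DIM))
y_new = X_new @ true_w + 0.1 * rng.normal(size=50)
warm_w = local_update(global_w, (X_new, y_new), epochs=2)
print("new-device error after warm start:", float(np.linalg.norm(warm_w - true_w)))
```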
