Deep Learning-Based Optimization of an IoT-Enabled Big Data Analytics Architecture for Edge-Cloud Computing

Himani Jain, Monika Saxena

Abstract

The proliferation of data generated by the Internet of Things (IoT) and Big Data Analytics (BDA) has revolutionized decision-making. However, real-time processing in distributed IoT networks faces significant obstacles. This research investigates the application of deep learning to edge-cloud computing to optimize BDA. The IoT Botnet Dataset, which comprises attributes of internet transactions, is employed. The procedure entails several stages: cleaning the data, extracting features with Term Frequency-Inverse Document Frequency (TF-IDF) vectorization, and identifying the most relevant features with Principal Component Analysis (PCA). A hybrid model combining Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Deep Autoencoders is proposed for data analysis. The hybrid model trained well, with training and validation losses steadily decreasing and settling at 0.0063 and 1.1182e-04, respectively, after 100 iterations. At the initial threshold, the model achieved high precision (0.97) and recall (0.954), with a minimal false negative (FN) rate of 0.13% and a false positive (FP) rate of 9%. At the final threshold, recall improved to 0.996, reducing the FP rate to 6% and increasing the FN rate to 2.59%, while precision decreased marginally to 0.943. Compared with the Autoencoder model, the hybrid model exhibited enhanced precision (0.943) and recall (0.996), though with a marginally higher FP rate of 6% and FN rate of 2.59%.
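The preprocessing pipeline described above (TF-IDF vectorization followed by PCA) can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' implementation; the sample records and the number of retained components are hypothetical stand-ins for the IoT Botnet Dataset attributes.

```python
# Sketch of the feature-extraction stage: TF-IDF vectorization, then PCA.
# Assumes scikit-learn; the records below are hypothetical placeholders
# for cleaned IoT transaction attributes, not real dataset entries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

records = [
    "tcp 80 syn ack payload_small",
    "udp 53 dns query benign",
    "tcp 443 tls handshake benign",
    "tcp 23 telnet brute_force botnet",
    "udp 1900 ssdp flood botnet",
    "tcp 80 http get benign",
]

# Step 1: turn each cleaned transaction record into a TF-IDF vector.
vectorizer = TfidfVectorizer()
X_tfidf = vectorizer.fit_transform(records).toarray()

# Step 2: project onto the principal components that capture
# the most variance, keeping only the most relevant features.
pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X_tfidf)

print(X_tfidf.shape, "->", X_reduced.shape)
```

The reduced feature matrix would then be fed to the hybrid CNN/RNN/Autoencoder model for training.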
