Mitigating Adversarial and AI-Evasion Attacks in Cybersecurity: Challenges and Solutions

Murali Mohan Josyula, M. Saidireddy

Abstract

Adversarial and AI-evasion attacks pose significant threats to the integrity and reliability of cybersecurity systems that leverage artificial intelligence (AI). These sophisticated attacks manipulate input data to deceive AI models, leading to erroneous classifications and compromised security measures. This paper examines the challenges presented by adversarial and AI-evasion attacks and proposes a novel hybrid defense mechanism to mitigate these threats. Our approach integrates anomaly detection, input sanitization, robust AI training, and a multi-stage defense strategy to enhance model resilience without sacrificing classification accuracy. Experimental evaluations on benchmark and custom cybersecurity datasets demonstrate the effectiveness of our solution, achieving an 85% reduction in attack success rates while maintaining over 90% classification accuracy. This research contributes to the development of more secure and reliable AI-driven cybersecurity frameworks, addressing the evolving landscape of adversarial threats.
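To illustrate the kind of multi-stage defense the abstract describes, the following is a minimal sketch, not the paper's actual implementation: input sanitization (clipping features to their expected range) followed by anomaly detection (a z-score check against training statistics) in front of a classifier. All function names, thresholds, and the toy model here are hypothetical.

```python
def sanitize(features, lo=0.0, hi=1.0):
    """Stage 1 (input sanitization): clip each feature into the expected range."""
    return [min(max(f, lo), hi) for f in features]

def anomaly_score(features, train_means, train_stds):
    """Stage 2 (anomaly detection): mean absolute z-score vs. training statistics."""
    zs = [abs(f - m) / s for f, m, s in zip(features, train_means, train_stds)]
    return sum(zs) / len(zs)

def defended_predict(features, model, train_means, train_stds, threshold=3.0):
    """Stage 3: reject anomalous inputs; otherwise pass the sanitized input to the model."""
    clean = sanitize(features)
    if anomaly_score(clean, train_means, train_stds) > threshold:
        return "rejected"  # flagged as likely adversarial
    return model(clean)

# Toy usage: a trivial stand-in "model" and hypothetical training statistics.
model = lambda x: "malicious" if sum(x) > 1.0 else "benign"
means, stds = [0.5, 0.5], [0.1, 0.1]
print(defended_predict([0.45, 0.5], model, means, stds))  # in-distribution input
print(defended_predict([9.0, -4.0], model, means, stds))  # out-of-range input is clipped, then rejected
```

The layering is the point: even if an adversarial input survives clipping, it must also pass the anomaly check before reaching the model, so an attacker has to defeat every stage at once.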
