Adversarial Voice Recognition: Protection Against Cyber Attacks
Abstract
This study presents new Internet of Things configurations that use reinforcement learning. The NIST National Vulnerability Database (NVD) independently rated voice-activated devices at 7.6 out of 10, an alarming risk score, and our investigation of inaudible attacks on these devices validates that assessment. Our basic network model shows a scenario in which an attacker gains unauthorised access to sensitive information on a protected laptop by issuing inaudible voice commands. By running a battery of attack simulations against this model, we demonstrate how easily privileged information can be discovered and owned at scale, without physical access, additional hardware, or enhanced device capabilities. After testing six reinforcement learning algorithms in Microsoft's CyberBattleSim framework, we selected Deep Q-learning with exploitation as the best option, allowing us to take possession of all nodes quickly and with little effort. Given the proliferation of mobile devices, voice activation, and non-linear microphones vulnerable to stealth attacks in the near-ultrasound or inaudible ranges, our research highlights the urgent need to understand non-conventional networks and to develop new cybersecurity measures that protect them. With more digital voice assistants than humans projected by 2024, and because these inaudible attacks originate in microphone design and digital signal processing, standard patches or firmware updates are the only available way to address them.
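As a hedged illustration of the reinforcement-learning lateral-movement idea described above, the sketch below trains a tabular epsilon-greedy Q-learning agent, a deliberately simplified stand-in for the Deep Q-learning agent run in CyberBattleSim, on a hypothetical four-node network. The topology, node names, rewards, and hyperparameters are all invented for illustration and do not come from the study's actual network model.

```python
import random

# Hypothetical toy network: which nodes an attacker can pivot to from each
# owned node. Names and topology are illustrative only.
EDGES = {
    "laptop": ["voice_assistant"],
    "voice_assistant": ["file_server"],
    "file_server": ["domain_controller"],
    "domain_controller": [],
}
ALL_NODES = set(EDGES)

def actions(owned):
    """Exploitable (source, target) moves from the currently owned nodes."""
    return [(s, t) for s in owned for t in EDGES[s] if t not in owned]

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular epsilon-greedy Q-learning over sets of owned nodes."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        owned = {"laptop"}  # initial foothold, e.g. via an inaudible command
        while True:
            acts = actions(owned)
            if not acts:
                break  # episode ends when no further pivot is possible
            state = frozenset(owned)
            if rng.random() < eps:
                a = rng.choice(acts)  # explore
            else:
                a = max(acts, key=lambda x: Q.get((state, x), 0.0))  # exploit
            owned.add(a[1])
            # Large reward for owning every node, small step penalty otherwise.
            reward = 100.0 if owned == ALL_NODES else -1.0
            nxt = frozenset(owned)
            future = max((Q.get((nxt, x), 0.0) for x in actions(owned)),
                         default=0.0)
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (reward + gamma * future - old)
    return Q

def greedy_rollout(Q):
    """Follow the learned policy greedily; return owned nodes and step count."""
    owned, steps = {"laptop"}, 0
    while actions(owned):
        state = frozenset(owned)
        a = max(actions(owned), key=lambda x: Q.get((state, x), 0.0))
        owned.add(a[1])
        steps += 1
    return owned, steps
```

On this toy chain the greedy policy owns all four nodes in three pivots; the real study's agent operates over CyberBattleSim's richer action and credential model, which this sketch does not attempt to reproduce.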